00:00:00.001 Started by upstream project "autotest-per-patch" build number 127219
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.087 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.088 The recommended git tool is: git
00:00:00.088 using credential 00000000-0000-0000-0000-000000000002
00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.134 Fetching changes from the remote Git repository
00:00:00.135 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.195 Using shallow fetch with depth 1
00:00:00.195 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.195 > git --version # timeout=10
00:00:00.235 > git --version # 'git version 2.39.2'
00:00:00.235 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.262 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.262 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.224 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.234 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.245 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD)
00:00:06.245 > git config core.sparsecheckout # timeout=10
00:00:06.258 > git read-tree -mu HEAD # timeout=10
00:00:06.274 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5
00:00:06.296 Commit message: "packer: Add bios builder"
00:00:06.296 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10
00:00:06.379 [Pipeline] Start of Pipeline
00:00:06.390 [Pipeline] library
00:00:06.391 Loading library shm_lib@master
00:00:06.391 Library shm_lib@master is cached. Copying from home.
00:00:06.404 [Pipeline] node
00:00:06.431 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.432 [Pipeline] {
00:00:06.441 [Pipeline] catchError
00:00:06.442 [Pipeline] {
00:00:06.453 [Pipeline] wrap
00:00:06.460 [Pipeline] {
00:00:06.465 [Pipeline] stage
00:00:06.467 [Pipeline] { (Prologue)
00:00:06.634 [Pipeline] sh
00:00:07.547 + logger -p user.info -t JENKINS-CI
00:00:07.573 [Pipeline] echo
00:00:07.574 Node: GP6
00:00:07.583 [Pipeline] sh
00:00:07.937 [Pipeline] setCustomBuildProperty
00:00:07.951 [Pipeline] echo
00:00:07.952 Cleanup processes
00:00:07.958 [Pipeline] sh
00:00:08.254 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.254 17348 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.271 [Pipeline] sh
00:00:08.567 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.567 ++ grep -v 'sudo pgrep'
00:00:08.567 ++ awk '{print $1}'
00:00:08.567 + sudo kill -9
00:00:08.567 + true
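The "Cleanup processes" step above is a stock pgrep/kill sweep. For anyone replaying it by hand outside Jenkins, a minimal sketch (workspace path copied from this log; the trailing guard mirrors the "+ true" above so an empty match cannot fail the job):

    #!/usr/bin/env bash
    # Kill anything still running out of the test workspace from a previous build.
    WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    PIDS=$(sudo pgrep -af "$WS" | grep -v 'sudo pgrep' | awk '{print $1}')
    [ -z "$PIDS" ] || sudo kill -9 $PIDS || true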
00:00:08.584 [Pipeline] cleanWs
00:00:08.596 [WS-CLEANUP] Deleting project workspace...
00:00:08.596 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.612 [WS-CLEANUP] done
00:00:08.616 [Pipeline] setCustomBuildProperty
00:00:08.632 [Pipeline] sh
00:00:08.925 + sudo git config --global --replace-all safe.directory '*'
00:00:08.991 [Pipeline] httpRequest
00:00:10.633 [Pipeline] echo
00:00:10.635 Sorcerer 10.211.164.101 is alive
00:00:10.644 [Pipeline] httpRequest
00:00:10.650 HttpMethod: GET
00:00:10.651 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:10.652 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:10.678 Response Code: HTTP/1.1 200 OK
00:00:10.679 Success: Status code 200 is in the accepted range: 200,404
00:00:10.679 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:25.419 [Pipeline] sh
00:00:25.740 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:25.759 [Pipeline] httpRequest
00:00:25.787 [Pipeline] echo
00:00:25.789 Sorcerer 10.211.164.101 is alive
00:00:25.798 [Pipeline] httpRequest
00:00:25.805 HttpMethod: GET
00:00:25.805 URL: http://10.211.164.101/packages/spdk_477912bde78e3585fa2c24b6d4c1ca669ea1e1a3.tar.gz
00:00:25.807 Sending request to url: http://10.211.164.101/packages/spdk_477912bde78e3585fa2c24b6d4c1ca669ea1e1a3.tar.gz
00:00:25.834 Response Code: HTTP/1.1 200 OK
00:00:25.835 Success: Status code 200 is in the accepted range: 200,404
00:00:25.835 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_477912bde78e3585fa2c24b6d4c1ca669ea1e1a3.tar.gz
00:02:53.391 [Pipeline] sh
00:02:53.677 + tar --no-same-owner -xf spdk_477912bde78e3585fa2c24b6d4c1ca669ea1e1a3.tar.gz
00:02:56.230 [Pipeline] sh
00:02:56.521 + git -C spdk log --oneline -n5
00:02:56.521 477912bde lib/accel: add spdk_accel_append_dix_generate/verify
00:02:56.521 325310f6a accel_perf: add support for DIX Generate/Verify
00:02:56.521 fcdc45f1b test/accel/dif: add DIX Generate/Verify suites
00:02:56.521 ae7704717 lib/accel: add DIX verify
00:02:56.521 8183d73cc lib/accel: add DIX generate
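The httpRequest steps above pull pre-packaged tarballs from the pool-local "Sorcerer" cache rather than cloning from Gerrit. An equivalent hand-run sketch, with curl standing in for the Jenkins httpRequest step (host and package name copied from this log):

    SORCERER=http://10.211.164.101/packages
    PKG=spdk_477912bde78e3585fa2c24b6d4c1ca669ea1e1a3.tar.gz
    curl -fo "$PKG" "$SORCERER/$PKG"   # -f: fail on HTTP errors instead of saving an error page
    tar --no-same-owner -xf "$PKG"     # --no-same-owner: keep extracted files owned by the invoking user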
00:02:56.535 [Pipeline] }
00:02:56.552 [Pipeline] // stage
00:02:56.563 [Pipeline] stage
00:02:56.566 [Pipeline] { (Prepare)
00:02:56.582 [Pipeline] writeFile
00:02:56.597 [Pipeline] sh
00:02:56.884 + logger -p user.info -t JENKINS-CI
00:02:56.898 [Pipeline] sh
00:02:57.190 + logger -p user.info -t JENKINS-CI
00:02:57.203 [Pipeline] sh
00:02:57.494 + cat autorun-spdk.conf
00:02:57.494 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:57.494 SPDK_TEST_NVMF=1
00:02:57.494 SPDK_TEST_NVME_CLI=1
00:02:57.494 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:57.494 SPDK_TEST_NVMF_NICS=e810
00:02:57.494 SPDK_TEST_VFIOUSER=1
00:02:57.494 SPDK_RUN_UBSAN=1
00:02:57.494 NET_TYPE=phy
00:02:57.503 RUN_NIGHTLY=0
00:02:57.508 [Pipeline] readFile
00:02:57.564 [Pipeline] withEnv
00:02:57.566 [Pipeline] {
00:02:57.579 [Pipeline] sh
00:02:57.863 + set -ex
00:02:57.863 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:57.863 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:57.863 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:57.863 ++ SPDK_TEST_NVMF=1
00:02:57.863 ++ SPDK_TEST_NVME_CLI=1
00:02:57.863 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:57.863 ++ SPDK_TEST_NVMF_NICS=e810
00:02:57.863 ++ SPDK_TEST_VFIOUSER=1
00:02:57.863 ++ SPDK_RUN_UBSAN=1
00:02:57.863 ++ NET_TYPE=phy
00:02:57.863 ++ RUN_NIGHTLY=0
00:02:57.863 + case $SPDK_TEST_NVMF_NICS in
00:02:57.863 + DRIVERS=ice
00:02:57.863 + [[ tcp == \r\d\m\a ]]
00:02:57.863 + [[ -n ice ]]
00:02:57.863 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:57.863 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:01.165 rmmod: ERROR: Module irdma is not currently loaded
00:03:01.165 rmmod: ERROR: Module i40iw is not currently loaded
00:03:01.165 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:01.165 + true
00:03:01.165 + for D in $DRIVERS
00:03:01.165 + sudo modprobe ice
00:03:01.165 + exit 0
00:03:01.175 [Pipeline] }
00:03:01.195 [Pipeline] // withEnv
00:03:01.201 [Pipeline] }
00:03:01.220 [Pipeline] // stage
00:03:01.231 [Pipeline] catchError
00:03:01.234 [Pipeline] {
00:03:01.250 [Pipeline] timeout
00:03:01.250 Timeout set to expire in 50 min
00:03:01.252 [Pipeline] {
00:03:01.267 [Pipeline] stage
00:03:01.269 [Pipeline] { (Tests)
00:03:01.288 [Pipeline] sh
00:03:01.572 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:01.572 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:01.572 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:01.572 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:03:01.572 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:01.572 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:01.572 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:03:01.572 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:01.572 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:01.572 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:01.572 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:03:01.572 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:01.572 + source /etc/os-release
00:03:01.572 ++ NAME='Fedora Linux'
00:03:01.572 ++ VERSION='38 (Cloud Edition)'
00:03:01.572 ++ ID=fedora
00:03:01.572 ++ VERSION_ID=38
00:03:01.572 ++ VERSION_CODENAME=
00:03:01.572 ++ PLATFORM_ID=platform:f38
00:03:01.572 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:03:01.572 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:01.572 ++ LOGO=fedora-logo-icon
00:03:01.572 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:03:01.572 ++ HOME_URL=https://fedoraproject.org/
00:03:01.572 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:03:01.572 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:01.572 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:01.572 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:01.572 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:03:01.572 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:01.572 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:03:01.572 ++ SUPPORT_END=2024-05-14
00:03:01.572 ++ VARIANT='Cloud Edition'
00:03:01.572 ++ VARIANT_ID=cloud
00:03:01.572 + uname -a
00:03:01.572 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:03:01.572 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:02.515 Hugepages
00:03:02.515 node hugesize free / total
00:03:02.515 node0 1048576kB 0 / 0
00:03:02.515 node0 2048kB 0 / 0
00:03:02.515 node1 1048576kB 0 / 0
00:03:02.515 node1 2048kB 0 / 0
00:03:02.515
00:03:02.515 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:02.515 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:03:02.515 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:03:02.515 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:03:02.515 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:03:02.515 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:03:02.515 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:03:02.515 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:03:02.515 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:03:02.515 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:03:02.515 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:03:02.515 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:03:02.515 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:03:02.515 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:03:02.515 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:03:02.515 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:03:02.515 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:03:02.515 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:03:02.515 + rm -f /tmp/spdk-ld-path
00:03:02.515 + source autorun-spdk.conf
00:03:02.515 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:02.515 ++ SPDK_TEST_NVMF=1
00:03:02.515 ++ SPDK_TEST_NVME_CLI=1
00:03:02.515 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:02.515 ++ SPDK_TEST_NVMF_NICS=e810
00:03:02.515 ++ SPDK_TEST_VFIOUSER=1
00:03:02.515 ++ SPDK_RUN_UBSAN=1
00:03:02.515 ++ NET_TYPE=phy
00:03:02.515 ++ RUN_NIGHTLY=0
00:03:02.515 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:02.515 + [[ -n '' ]]
00:03:02.515 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:02.515 + for M in /var/spdk/build-*-manifest.txt
00:03:02.515 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:02.515 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:02.515 + for M in /var/spdk/build-*-manifest.txt
00:03:02.515 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:02.515 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:02.515 ++ uname
00:03:02.515 + [[ Linux == \L\i\n\u\x ]]
00:03:02.515 + sudo dmesg -T
00:03:02.775 + sudo dmesg --clear
00:03:02.775 + dmesg_pid=18629
00:03:02.775 + [[ Fedora Linux == FreeBSD ]]
00:03:02.775 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:02.775 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:02.775 + sudo dmesg -Tw
00:03:02.775 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:02.775 + [[ -x /usr/src/fio-static/fio ]]
00:03:02.775 + export FIO_BIN=/usr/src/fio-static/fio
00:03:02.775 + FIO_BIN=/usr/src/fio-static/fio
00:03:02.775 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:02.775 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:02.775 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:02.775 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:02.775 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:02.775 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:02.775 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:02.775 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:02.775 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:02.775 Test configuration:
00:03:02.775 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:02.775 SPDK_TEST_NVMF=1
00:03:02.775 SPDK_TEST_NVME_CLI=1
00:03:02.775 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:02.775 SPDK_TEST_NVMF_NICS=e810
00:03:02.775 SPDK_TEST_VFIOUSER=1
00:03:02.775 SPDK_RUN_UBSAN=1
00:03:02.775 NET_TYPE=phy
00:03:02.775 RUN_NIGHTLY=0
13:57:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:02.775 13:57:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:02.775 13:57:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:02.775 13:57:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:02.775 13:57:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.775 13:57:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.775 13:57:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.775 13:57:10 -- paths/export.sh@5 -- $ export PATH
00:03:02.775 13:57:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.775 13:57:10 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:02.775 13:57:10 -- common/autobuild_common.sh@447 -- $ date +%s
00:03:02.775 13:57:10 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721995030.XXXXXX
00:03:02.775 13:57:10 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721995030.1BmpUG
00:03:02.775 13:57:10 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:03:02.775 13:57:10 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:03:02.775 13:57:10 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:03:02.775 13:57:10 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:02.775 13:57:10 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:02.775 13:57:10 -- common/autobuild_common.sh@463 -- $ get_config_params
00:03:02.775 13:57:10 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:03:02.775 13:57:10 -- common/autotest_common.sh@10 -- $ set +x
00:03:02.775 13:57:10 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:03:02.775 13:57:10 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:03:02.775 13:57:10 -- pm/common@17 -- $ local monitor
00:03:02.775 13:57:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:02.775 13:57:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:02.775 13:57:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:02.775 13:57:10 -- pm/common@21 -- $ date +%s
00:03:02.775 13:57:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:02.775 13:57:10 -- pm/common@21 -- $ date +%s
00:03:02.775 13:57:10 -- pm/common@25 -- $ sleep 1
00:03:02.775 13:57:10 -- pm/common@21 -- $ date +%s
00:03:02.775 13:57:10 -- pm/common@21 -- $ date +%s
00:03:02.775 13:57:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721995030
00:03:02.775 13:57:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721995030
00:03:02.775 13:57:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721995030
00:03:02.775 13:57:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721995030
00:03:02.775 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721995030_collect-vmstat.pm.log
00:03:02.775 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721995030_collect-cpu-load.pm.log
00:03:02.775 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721995030_collect-cpu-temp.pm.log
00:03:02.775 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721995030_collect-bmc-pm.bmc.pm.log
00:03:03.718 13:57:11 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:03:03.718 13:57:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:03.718 13:57:11 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:03.718 13:57:11 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:03.718 13:57:11 -- spdk/autobuild.sh@16 -- $ date -u
00:03:03.718 Fri Jul 26 11:57:11 AM UTC 2024
00:03:03.718 13:57:11 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:03.718 v24.09-pre-326-g477912bde
00:03:03.718 13:57:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:03.718 13:57:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:03.718 13:57:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:03.718 13:57:11 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:03.718 13:57:11 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:03.718 13:57:11 -- common/autotest_common.sh@10 -- $ set +x
00:03:03.718 ************************************
00:03:03.718 START TEST ubsan
00:03:03.718 ************************************
00:03:03.718 13:57:11 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:03:03.718 using ubsan
00:03:03.718
00:03:03.718 real 0m0.000s
00:03:03.718 user 0m0.000s
00:03:03.718 sys 0m0.000s
00:03:03.718 13:57:11 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:03.718 13:57:11 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:03.718 ************************************
00:03:03.718 END TEST ubsan
00:03:03.718 ************************************
00:03:03.718 13:57:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:03.718 13:57:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:03.718 13:57:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:03.718 13:57:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:03.718 13:57:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:03.718 13:57:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:03.718 13:57:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:03.718 13:57:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:03.718 13:57:11 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:03:04.285 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:04.285 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:05.661 Using 'verbs' RDMA provider
00:03:18.817 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:28.810 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:28.810 Creating mk/config.mk...done.
00:03:28.810 Creating mk/cc.flags.mk...done.
00:03:28.810 Type 'make' to build.
00:03:28.810 13:57:36 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:03:28.810 13:57:36 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:28.810 13:57:36 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:28.810 13:57:36 -- common/autotest_common.sh@10 -- $ set +x
00:03:29.069 ************************************
00:03:29.069 START TEST make
00:03:29.069 ************************************
00:03:29.069 13:57:36 make -- common/autotest_common.sh@1125 -- $ make -j48
00:03:29.329 make[1]: Nothing to be done for 'all'.
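The autobuild phase above boils down to one configure-and-make pass. A condensed sketch for reproducing it outside the pipeline, with the path and flags copied verbatim from the xtrace output (adjust -j48 to the local core count):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j48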
00:03:31.887 The Meson build system
00:03:31.887 Version: 1.3.1
00:03:31.887 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:31.887 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:31.887 Build type: native build
00:03:31.887 Project name: libvfio-user
00:03:31.887 Project version: 0.0.1
00:03:31.887 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:31.887 C linker for the host machine: cc ld.bfd 2.39-16
00:03:31.887 Host machine cpu family: x86_64
00:03:31.887 Host machine cpu: x86_64
00:03:31.887 Run-time dependency threads found: YES
00:03:31.887 Library dl found: YES
00:03:31.887 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:31.887 Run-time dependency json-c found: YES 0.17
00:03:31.887 Run-time dependency cmocka found: YES 1.1.7
00:03:31.887 Program pytest-3 found: NO
00:03:31.887 Program flake8 found: NO
00:03:31.887 Program misspell-fixer found: NO
00:03:31.887 Program restructuredtext-lint found: NO
00:03:31.887 Program valgrind found: YES (/usr/bin/valgrind)
00:03:31.887 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:31.887 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:31.887 Compiler for C supports arguments -Wwrite-strings: YES
00:03:31.887 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:31.887 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:31.888 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:31.888 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:31.888 Build targets in project: 8
00:03:31.888 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:31.888 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:31.888
00:03:31.888 libvfio-user 0.0.1
00:03:31.888
00:03:31.888 User defined options
00:03:31.888 buildtype : debug
00:03:31.888 default_library: shared
00:03:31.888 libdir : /usr/local/lib
00:03:31.888
00:03:31.888 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:32.155 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:32.427 [1/37] Compiling C object samples/null.p/null.c.o
00:03:32.427 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:32.427 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:32.427 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:32.427 [5/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:32.427 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:32.427 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:32.697 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:32.697 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:32.697 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:32.697 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:32.697 [12/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:32.697 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:32.697 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:32.697 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:32.697 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:32.697 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:32.697 [18/37] Compiling C object samples/server.p/server.c.o
00:03:32.697 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:32.697 [20/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:32.697 [21/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:32.697 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:32.697 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:32.697 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:32.697 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:32.963 [26/37] Compiling C object samples/client.p/client.c.o
00:03:32.963 [27/37] Linking target samples/client
00:03:32.963 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:32.963 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:32.963 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:03:33.225 [31/37] Linking target test/unit_tests
00:03:33.225 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:33.225 [33/37] Linking target samples/server
00:03:33.225 [34/37] Linking target samples/null
00:03:33.225 [35/37] Linking target samples/gpio-pci-idio-16
00:03:33.225 [36/37] Linking target samples/lspci
00:03:33.225 [37/37] Linking target samples/shadow_ioeventfd_server
00:03:33.225 INFO: autodetecting backend as ninja
00:03:33.225 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
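The libvfio-user build that just finished is a plain meson/ninja flow; SPDK's build normally drives it, but a hand-run sketch matching the "User defined options" summary above would be:

    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    meson setup "$BUILD" "$SRC" --buildtype=debug --default-library=shared --libdir=/usr/local/lib
    ninja -C "$BUILD"
    # The DESTDIR install on the next log line stages the result under spdk/build/libvfio-user.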
00:03:33.492 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:34.060 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:34.060 ninja: no work to do.
00:03:38.272 The Meson build system
00:03:38.272 Version: 1.3.1
00:03:38.272 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:38.272 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:38.272 Build type: native build
00:03:38.272 Program cat found: YES (/usr/bin/cat)
00:03:38.272 Project name: DPDK
00:03:38.272 Project version: 24.03.0
00:03:38.272 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:38.272 C linker for the host machine: cc ld.bfd 2.39-16
00:03:38.272 Host machine cpu family: x86_64
00:03:38.272 Host machine cpu: x86_64
00:03:38.272 Message: ## Building in Developer Mode ##
00:03:38.272 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:38.272 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:38.272 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:38.272 Program python3 found: YES (/usr/bin/python3)
00:03:38.272 Program cat found: YES (/usr/bin/cat)
00:03:38.272 Compiler for C supports arguments -march=native: YES
00:03:38.272 Checking for size of "void *" : 8
00:03:38.272 Checking for size of "void *" : 8 (cached)
00:03:38.272 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:03:38.272 Library m found: YES
00:03:38.272 Library numa found: YES
00:03:38.272 Has header "numaif.h" : YES
00:03:38.272 Library fdt found: NO
00:03:38.272 Library execinfo found: NO
00:03:38.272 Has header "execinfo.h" : YES
00:03:38.272 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:38.272 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:38.272 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:38.272 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:38.272 Run-time dependency openssl found: YES 3.0.9
00:03:38.272 Run-time dependency libpcap found: YES 1.10.4
00:03:38.272 Has header "pcap.h" with dependency libpcap: YES
00:03:38.272 Compiler for C supports arguments -Wcast-qual: YES
00:03:38.272 Compiler for C supports arguments -Wdeprecated: YES
00:03:38.272 Compiler for C supports arguments -Wformat: YES
00:03:38.272 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:38.272 Compiler for C supports arguments -Wformat-security: NO
00:03:38.272 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:38.272 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:38.272 Compiler for C supports arguments -Wnested-externs: YES
00:03:38.272 Compiler for C supports arguments -Wold-style-definition: YES
00:03:38.272 Compiler for C supports arguments -Wpointer-arith: YES
00:03:38.272 Compiler for C supports arguments -Wsign-compare: YES
00:03:38.272 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:38.272 Compiler for C supports arguments -Wundef: YES
00:03:38.272 Compiler for C supports arguments -Wwrite-strings: YES
00:03:38.272 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:38.272 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:38.272 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:38.272 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:38.272 Program objdump found: YES (/usr/bin/objdump)
00:03:38.272 Compiler for C supports arguments -mavx512f: YES
00:03:38.272 Checking if "AVX512 checking" compiles: YES
00:03:38.272 Fetching value of define "__SSE4_2__" : 1
00:03:38.272 Fetching value of define "__AES__" : 1
00:03:38.272 Fetching value of define "__AVX__" : 1
00:03:38.272 Fetching value of define "__AVX2__" : (undefined)
00:03:38.272 Fetching value of define "__AVX512BW__" : (undefined)
00:03:38.272 Fetching value of define "__AVX512CD__" : (undefined)
00:03:38.272 Fetching value of define "__AVX512DQ__" : (undefined)
00:03:38.273 Fetching value of define "__AVX512F__" : (undefined)
00:03:38.273 Fetching value of define "__AVX512VL__" : (undefined)
00:03:38.273 Fetching value of define "__PCLMUL__" : 1
00:03:38.273 Fetching value of define "__RDRND__" : 1
00:03:38.273 Fetching value of define "__RDSEED__" : (undefined)
00:03:38.273 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:38.273 Fetching value of define "__znver1__" : (undefined)
00:03:38.273 Fetching value of define "__znver2__" : (undefined)
00:03:38.273 Fetching value of define "__znver3__" : (undefined)
00:03:38.273 Fetching value of define "__znver4__" : (undefined)
00:03:38.273 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:38.273 Message: lib/log: Defining dependency "log"
00:03:38.273 Message: lib/kvargs: Defining dependency "kvargs"
00:03:38.273 Message: lib/telemetry: Defining dependency "telemetry"
00:03:38.273 Checking for function "getentropy" : NO
00:03:38.273 Message: lib/eal: Defining dependency "eal"
00:03:38.273 Message: lib/ring: Defining dependency "ring"
00:03:38.273 Message: lib/rcu: Defining dependency "rcu"
00:03:38.273 Message: lib/mempool: Defining dependency "mempool"
00:03:38.273 Message: lib/mbuf: Defining dependency "mbuf"
00:03:38.273 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:38.273 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:38.273 Compiler for C supports arguments -mpclmul: YES
00:03:38.273 Compiler for C supports arguments -maes: YES
00:03:38.273 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:38.273 Compiler for C supports arguments -mavx512bw: YES
00:03:38.273 Compiler for C supports arguments -mavx512dq: YES
00:03:38.273 Compiler for C supports arguments -mavx512vl: YES
00:03:38.273 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:38.273 Compiler for C supports arguments -mavx2: YES
00:03:38.273 Compiler for C supports arguments -mavx: YES
00:03:38.273 Message: lib/net: Defining dependency "net"
00:03:38.273 Message: lib/meter: Defining dependency "meter"
00:03:38.273 Message: lib/ethdev: Defining dependency "ethdev"
00:03:38.273 Message: lib/pci: Defining dependency "pci"
00:03:38.273 Message: lib/cmdline: Defining dependency "cmdline"
00:03:38.273 Message: lib/hash: Defining dependency "hash"
00:03:38.273 Message: lib/timer: Defining dependency "timer"
00:03:38.273 Message: lib/compressdev: Defining dependency "compressdev"
00:03:38.273 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:38.273 Message: lib/dmadev: Defining dependency "dmadev"
00:03:38.273 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:38.273 Message: lib/power: Defining dependency "power"
00:03:38.273 Message: lib/reorder: Defining dependency "reorder"
00:03:38.273 Message: lib/security: Defining dependency "security"
00:03:38.273 Has header "linux/userfaultfd.h" : YES
00:03:38.273 Has header "linux/vduse.h" : YES
00:03:38.273 Message: lib/vhost: Defining dependency "vhost"
00:03:38.273 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:38.273 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:38.273 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:38.273 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:38.273 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:38.273 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:38.273 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:38.273 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:38.273 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:38.273 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:38.273 Program doxygen found: YES (/usr/bin/doxygen)
00:03:38.273 Configuring doxy-api-html.conf using configuration
00:03:38.273 Configuring doxy-api-man.conf using configuration
00:03:38.273 Program mandb found: YES (/usr/bin/mandb)
00:03:38.273 Program sphinx-build found: NO
00:03:38.273 Configuring rte_build_config.h using configuration
00:03:38.273 Message:
00:03:38.273 =================
00:03:38.273 Applications Enabled
00:03:38.273 =================
00:03:38.273
00:03:38.273 apps:
00:03:38.273
00:03:38.273
00:03:38.273 Message:
00:03:38.273 =================
00:03:38.273 Libraries Enabled
00:03:38.273 =================
00:03:38.273
00:03:38.273 libs:
00:03:38.273 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:38.273 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:38.273 cryptodev, dmadev, power, reorder, security, vhost,
00:03:38.273
00:03:38.273 Message:
00:03:38.273 ===============
00:03:38.273 Drivers Enabled
00:03:38.273 ===============
00:03:38.273
00:03:38.273 common:
00:03:38.273
00:03:38.273 bus:
00:03:38.273 pci, vdev,
00:03:38.273 mempool:
00:03:38.273 ring,
00:03:38.273 dma:
00:03:38.273
00:03:38.273 net:
00:03:38.273
00:03:38.273 crypto:
00:03:38.273
00:03:38.273 compress:
00:03:38.273
00:03:38.273 vdpa:
00:03:38.273
00:03:38.273
00:03:38.273 Message:
00:03:38.273 =================
00:03:38.273 Content Skipped
00:03:38.273 =================
00:03:38.273
00:03:38.273 apps:
00:03:38.273 dumpcap: explicitly disabled via build config
00:03:38.273 graph: explicitly disabled via build config
00:03:38.273 pdump: explicitly disabled via build config
00:03:38.273 proc-info: explicitly disabled via build config
00:03:38.273 test-acl: explicitly disabled via build config
00:03:38.273 test-bbdev: explicitly disabled via build config
00:03:38.273 test-cmdline: explicitly disabled via build config
00:03:38.273 test-compress-perf: explicitly disabled via build config
00:03:38.273 test-crypto-perf: explicitly disabled via build config
00:03:38.273 test-dma-perf: explicitly disabled via build config
00:03:38.273 test-eventdev: explicitly disabled via build config
00:03:38.273 test-fib: explicitly disabled via build config
00:03:38.273 test-flow-perf: explicitly disabled via build config
00:03:38.273 test-gpudev: explicitly disabled via build config
00:03:38.273 test-mldev: explicitly disabled via build config
00:03:38.273 test-pipeline: explicitly disabled via build config
00:03:38.273 test-pmd: explicitly disabled via build config
00:03:38.273 test-regex: explicitly disabled via build config
00:03:38.273 test-sad: explicitly disabled via build config
00:03:38.273 test-security-perf: explicitly disabled via build config
00:03:38.273
00:03:38.273 libs:
00:03:38.273 argparse: explicitly disabled via build config
00:03:38.273 metrics: explicitly disabled via build config
00:03:38.273 acl: explicitly disabled via build config
00:03:38.273 bbdev: explicitly disabled via build config
00:03:38.273 bitratestats: explicitly disabled via build config
00:03:38.273 bpf: explicitly disabled via build config
00:03:38.273 cfgfile: explicitly disabled via build config
00:03:38.273 distributor: explicitly disabled via build config
00:03:38.273 efd: explicitly disabled via build config
00:03:38.273 eventdev: explicitly disabled via build config
00:03:38.273 dispatcher: explicitly disabled via build config
00:03:38.273 gpudev: explicitly disabled via build config
00:03:38.273 gro: explicitly disabled via build config
00:03:38.273 gso: explicitly disabled via build config
00:03:38.273 ip_frag: explicitly disabled via build config
00:03:38.273 jobstats: explicitly disabled via build config
00:03:38.273 latencystats: explicitly disabled via build config
00:03:38.273 lpm: explicitly disabled via build config
00:03:38.273 member: explicitly disabled via build config
00:03:38.273 pcapng: explicitly disabled via build config
00:03:38.273 rawdev: explicitly disabled via build config
00:03:38.273 regexdev: explicitly disabled via build config
00:03:38.273 mldev: explicitly disabled via build config
00:03:38.273 rib: explicitly disabled via build config
00:03:38.273 sched: explicitly disabled via build config
00:03:38.273 stack: explicitly disabled via build config
00:03:38.273 ipsec: explicitly disabled via build config
00:03:38.273 pdcp: explicitly disabled via build config
00:03:38.273 fib: explicitly disabled via build config
00:03:38.273 port: explicitly disabled via build config
00:03:38.273 pdump: explicitly disabled via build config
00:03:38.273 table: explicitly disabled via build config
00:03:38.273 pipeline: explicitly disabled via build config
00:03:38.273 graph: explicitly disabled via build config
00:03:38.273 node: explicitly disabled via build config
00:03:38.273
00:03:38.273 drivers:
00:03:38.273 common/cpt: not in enabled drivers build config
00:03:38.273 common/dpaax: not in enabled drivers build config
00:03:38.273 common/iavf: not in enabled drivers build config
00:03:38.273 common/idpf: not in enabled drivers build config
00:03:38.273 common/ionic: not in enabled drivers build config
00:03:38.273 common/mvep: not in enabled drivers build config
00:03:38.273 common/octeontx: not in enabled drivers build config
00:03:38.273 bus/auxiliary: not in enabled drivers build config
00:03:38.273 bus/cdx: not in enabled drivers build config
00:03:38.273 bus/dpaa: not in enabled drivers build config
00:03:38.273 bus/fslmc: not in enabled drivers build config
00:03:38.273 bus/ifpga: not in enabled drivers build config
00:03:38.273 bus/platform: not in enabled drivers build config
00:03:38.273 bus/uacce: not in enabled drivers build config
00:03:38.273 bus/vmbus: not in enabled drivers build config
00:03:38.273 common/cnxk: not in enabled drivers build config
00:03:38.273 common/mlx5: not in enabled drivers build config
00:03:38.273 common/nfp: not in enabled drivers build config
00:03:38.273 common/nitrox: not in enabled drivers build config
00:03:38.273 common/qat: not in enabled drivers build config
00:03:38.273 common/sfc_efx: not in enabled drivers build config
00:03:38.273 mempool/bucket: not in enabled drivers build config
00:03:38.273 mempool/cnxk: not in enabled drivers build config
00:03:38.273 mempool/dpaa: not in enabled drivers build config
00:03:38.273 mempool/dpaa2: not in enabled drivers build config
00:03:38.273 mempool/octeontx: not in enabled drivers build config
00:03:38.273 mempool/stack: not in enabled drivers build config
00:03:38.274 dma/cnxk: not in enabled drivers build config
00:03:38.274 dma/dpaa: not in enabled drivers build config
00:03:38.274 dma/dpaa2: not in enabled drivers build config
00:03:38.274 dma/hisilicon: not in enabled drivers build config
00:03:38.274 dma/idxd: not in enabled drivers build config
00:03:38.274 dma/ioat: not in enabled drivers build config
00:03:38.274 dma/skeleton: not in enabled drivers build config
00:03:38.274 net/af_packet: not in enabled drivers build config
00:03:38.274 net/af_xdp: not in enabled drivers build config
00:03:38.274 net/ark: not in enabled drivers build config
00:03:38.274 net/atlantic: not in enabled drivers build config
00:03:38.274 net/avp: not in enabled drivers build config
00:03:38.274 net/axgbe: not in enabled drivers build config
00:03:38.274 net/bnx2x: not in enabled drivers build config
00:03:38.274 net/bnxt: not in enabled drivers build config
00:03:38.274 net/bonding: not in enabled drivers build config
00:03:38.274 net/cnxk: not in enabled drivers build config
00:03:38.274 net/cpfl: not in enabled drivers build config
00:03:38.274 net/cxgbe: not in enabled drivers build config
00:03:38.274 net/dpaa: not in enabled drivers build config
00:03:38.274 net/dpaa2: not in enabled drivers build config
00:03:38.274 net/e1000: not in enabled drivers build config
00:03:38.274 net/ena: not in enabled drivers build config
00:03:38.274 net/enetc: not in enabled drivers build config
00:03:38.274 net/enetfec: not in enabled drivers build config
00:03:38.274 net/enic: not in enabled drivers build config
00:03:38.274 net/failsafe: not in enabled drivers build config
00:03:38.274 net/fm10k: not in enabled drivers build config
00:03:38.274 net/gve: not in enabled drivers build config
00:03:38.274 net/hinic: not in enabled drivers build config
00:03:38.274 net/hns3: not in enabled drivers build config
00:03:38.274 net/i40e: not in enabled drivers build config
00:03:38.274 net/iavf: not in enabled drivers build config
00:03:38.274 net/ice: not in enabled drivers build config
00:03:38.274 net/idpf: not in enabled drivers build config
00:03:38.274 net/igc: not in enabled drivers build config
00:03:38.274 net/ionic: not in enabled drivers build config
00:03:38.274 net/ipn3ke: not in enabled drivers build config
00:03:38.274 net/ixgbe: not in enabled drivers build config
00:03:38.274 net/mana: not in enabled drivers build config
00:03:38.274 net/memif: not in enabled drivers build config
00:03:38.274 net/mlx4: not in enabled drivers build config
00:03:38.274 net/mlx5: not in enabled drivers build config
00:03:38.274 net/mvneta: not in enabled drivers build config
00:03:38.274 net/mvpp2: not in enabled drivers build config
00:03:38.274 net/netvsc: not in enabled drivers build config
00:03:38.274 net/nfb: not in enabled drivers build config
00:03:38.274 net/nfp: not in enabled drivers build config
00:03:38.274 net/ngbe: not in enabled drivers build config
00:03:38.274 net/null: not in enabled drivers build config
00:03:38.274 net/octeontx: not in enabled drivers build config
00:03:38.274 net/octeon_ep: not in enabled drivers build config
00:03:38.274 net/pcap: not in enabled drivers build config
00:03:38.274 net/pfe: not in enabled drivers build config
00:03:38.274 net/qede: not in enabled drivers build config
00:03:38.274 net/ring: not in enabled drivers build config
00:03:38.274 net/sfc: not in enabled drivers build config
00:03:38.274 net/softnic: not in enabled drivers build config
00:03:38.274 net/tap: not in enabled drivers build config
00:03:38.274 net/thunderx: not in enabled drivers build config
00:03:38.274 net/txgbe: not in enabled drivers build config
00:03:38.274 net/vdev_netvsc: not in enabled drivers build config
00:03:38.274 net/vhost: not in enabled drivers build config
00:03:38.274 net/virtio: not in enabled drivers build config
00:03:38.274 net/vmxnet3: not in enabled drivers build config
00:03:38.274 raw/*: missing internal dependency, "rawdev"
00:03:38.274 crypto/armv8: not in enabled drivers build config
00:03:38.274 crypto/bcmfs: not in enabled drivers build config
00:03:38.274 crypto/caam_jr: not in enabled drivers build config
00:03:38.274 crypto/ccp: not in enabled drivers build config
00:03:38.274 crypto/cnxk: not in enabled drivers build config
00:03:38.274 crypto/dpaa_sec: not in enabled drivers build config
00:03:38.274 crypto/dpaa2_sec: not in enabled drivers build config
00:03:38.274 crypto/ipsec_mb: not in enabled drivers build config
00:03:38.274 crypto/mlx5: not in enabled drivers build config
00:03:38.274 crypto/mvsam: not in enabled drivers build config
00:03:38.274 crypto/nitrox: not in enabled drivers build config
00:03:38.274 crypto/null: not in enabled drivers build config
00:03:38.274 crypto/octeontx: not in enabled drivers build config
00:03:38.274 crypto/openssl: not in enabled drivers build config
00:03:38.274 crypto/scheduler: not in enabled drivers build config
00:03:38.274 crypto/uadk: not in enabled drivers build config
00:03:38.274 crypto/virtio: not in enabled drivers build config
00:03:38.274 compress/isal: not in enabled drivers build config
00:03:38.274 compress/mlx5: not in enabled drivers build config
00:03:38.274 compress/nitrox: not in enabled drivers build config
00:03:38.274 compress/octeontx: not in enabled drivers build config
00:03:38.274 compress/zlib: not in enabled drivers build config
00:03:38.274 regex/*: missing internal dependency, "regexdev"
00:03:38.274 ml/*: missing internal dependency, "mldev"
00:03:38.274 vdpa/ifc: not in enabled drivers build config
00:03:38.274 vdpa/mlx5: not in enabled drivers build config
00:03:38.274 vdpa/nfp: not in enabled drivers build config
00:03:38.274 vdpa/sfc: not in enabled drivers build config
00:03:38.274 event/*: missing internal dependency, "eventdev"
00:03:38.274 baseband/*: missing internal dependency, "bbdev"
00:03:38.274 gpu/*: missing internal dependency, "gpudev"
00:03:38.274
00:03:38.274
00:03:38.274 Build targets in project: 85
00:03:38.274
00:03:38.274 DPDK 24.03.0
00:03:38.274
00:03:38.274 User defined options
00:03:38.274 buildtype : debug
00:03:38.274 default_library : shared
00:03:38.274 libdir : lib
00:03:38.274 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:38.274 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:38.274 c_link_args :
00:03:38.274 cpu_instruction_set: native
00:03:38.274 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:03:38.274 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:03:38.274 enable_docs : false
00:03:38.274 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:03:38.274 enable_kmods : false
00:03:38.274 max_lcores : 128
00:03:38.274 tests : false
00:03:38.274
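For reference, the DPDK configuration summarized above can be approximated standalone with a meson call along these lines. This is a sketch assembled from the "User defined options" block; the long disable_apps/disable_libs lists are left out for brevity, and in this pipeline SPDK's configure generates the full command:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
    meson setup build-tmp --buildtype=debug --default-library=shared \
        --prefix="$PWD/build" --libdir=lib \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
    ninja -C build-tmp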
00:03:38.274 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:38.543 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:03:38.543 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:38.543 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:38.543 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:38.543 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:38.811 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:38.811 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:38.811 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:38.811 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:38.811 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:38.811 [10/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:38.811 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:38.811 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:38.811 [13/268] Linking static target lib/librte_kvargs.a
00:03:38.811 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:38.811 [15/268] Linking static target lib/librte_log.a
00:03:38.811 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:39.389 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:39.389 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:39.389 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:39.389 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:39.656 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:39.656 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:39.656 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:39.656 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:39.656 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:39.656 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:39.656 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:39.656 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:39.656 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:39.656 [30/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:39.656 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:39.656 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:39.656 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:39.656 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:39.656 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:39.656 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:39.656 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:39.656 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:39.656 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:39.656 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:39.656 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:39.656 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:39.656 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:39.656 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:39.656 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:39.656 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:39.656 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:39.656 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:39.656 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:39.656 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:39.656 [51/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:39.656 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:39.656 [53/268] Linking static target lib/librte_telemetry.a
00:03:39.656 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:39.656 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:39.656 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:39.656 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:39.656 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:39.656 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:39.919 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:39.919 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:39.919 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:39.919 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:39.919 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:39.919 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:39.919 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:39.919 [67/268] Linking target lib/librte_log.so.24.1
00:03:40.187 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:40.187 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:40.187 [70/268] Linking static target lib/librte_pci.a
00:03:40.187 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:40.452 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:40.452 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:40.452 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:40.452 [75/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:40.452 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:40.452 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:40.452 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:40.452 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:40.452 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:40.452 [81/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:40.452 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:40.452 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:40.452 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:40.452 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:40.452 [86/268] Linking static target lib/librte_ring.a
00:03:40.452 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:40.452 [88/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:40.452 [89/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:40.452 [90/268] Linking target lib/librte_kvargs.so.24.1
00:03:40.452 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:40.452 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:40.726 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:40.726 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:40.726 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:40.726 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:40.726 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:40.726 [98/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:40.726 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:40.726 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:40.726 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:40.726 [102/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:40.726 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:40.726 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:40.726 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:40.726 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:40.726 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:40.726 [108/268] Linking static target lib/librte_meter.a
00:03:40.726 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:40.726 [110/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:40.726 [111/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:40.726 [112/268] Linking static target lib/librte_rcu.a 00:03:40.726 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:40.726 [114/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:40.726 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:40.726 [116/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:40.726 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:40.726 [118/268] Linking static target lib/librte_mempool.a 00:03:40.988 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:40.988 [120/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:40.988 [121/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:40.988 [122/268] Linking target lib/librte_telemetry.so.24.1 00:03:40.988 [123/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:40.988 [124/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:40.988 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:40.988 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:40.988 [127/268] Linking static target lib/librte_eal.a 00:03:40.988 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:40.988 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:40.988 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:40.988 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:41.254 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:41.254 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:41.254 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:41.254 [135/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:41.254 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:41.254 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:41.254 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.254 [139/268] Linking static target lib/librte_net.a 00:03:41.254 [140/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:41.254 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:41.254 [142/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.254 [143/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.519 [144/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:41.519 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:41.519 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:41.519 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:41.519 [148/268] Linking static target lib/librte_cmdline.a 00:03:41.519 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:41.519 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:41.519 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:41.519 [152/268] Linking static target lib/librte_timer.a 
00:03:41.519 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:41.519 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:41.783 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:41.783 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:41.783 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:41.783 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:41.783 [159/268] Linking static target lib/librte_dmadev.a 00:03:41.783 [160/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.783 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:41.783 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:41.783 [163/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:41.783 [164/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.783 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:41.783 [166/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:41.783 [167/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:42.043 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:42.043 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:42.043 [170/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.043 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:42.043 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:42.043 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:42.043 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:42.044 [175/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:42.044 [176/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:42.044 [177/268] Linking static target lib/librte_power.a 00:03:42.044 [178/268] Linking static target lib/librte_compressdev.a 00:03:42.303 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:42.303 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:42.303 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:42.303 [182/268] Linking static target lib/librte_hash.a 00:03:42.303 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.303 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:42.303 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:42.303 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:42.303 [187/268] Linking static target lib/librte_reorder.a 00:03:42.303 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:42.303 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:42.303 [190/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:42.303 [191/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:42.303 [192/268] Linking static target 
lib/librte_mbuf.a 00:03:42.303 [193/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:42.303 [194/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.303 [195/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:42.303 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:42.303 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:42.562 [198/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:42.562 [199/268] Linking static target lib/librte_security.a 00:03:42.562 [200/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.562 [201/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.562 [202/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:42.562 [203/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:42.562 [204/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:42.562 [205/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.562 [206/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:42.562 [207/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:42.562 [208/268] Linking static target drivers/librte_bus_vdev.a 00:03:42.821 [209/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:42.821 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:42.821 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:42.821 [212/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.821 [213/268] Linking static target drivers/librte_bus_pci.a 00:03:42.821 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:42.821 [215/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:42.821 [216/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.821 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.821 [218/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:42.821 [219/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:42.821 [220/268] Linking static target drivers/librte_mempool_ring.a 00:03:42.821 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.081 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:43.081 [223/268] Linking static target lib/librte_ethdev.a 00:03:43.081 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.081 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:43.081 [226/268] Linking static target lib/librte_cryptodev.a 00:03:44.455 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.022 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:46.923 [229/268] Generating lib/eal.sym_chk with a custom 
command (wrapped by meson to capture output) 00:03:47.181 [230/268] Linking target lib/librte_eal.so.24.1 00:03:47.181 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:47.181 [232/268] Linking target lib/librte_timer.so.24.1 00:03:47.181 [233/268] Linking target lib/librte_meter.so.24.1 00:03:47.181 [234/268] Linking target lib/librte_ring.so.24.1 00:03:47.181 [235/268] Linking target lib/librte_pci.so.24.1 00:03:47.181 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:47.181 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:47.181 [238/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.439 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:47.439 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:47.439 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:47.439 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:47.439 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:47.439 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:47.439 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:47.439 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:47.439 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:47.439 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:47.698 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:47.698 [250/268] Linking target lib/librte_mbuf.so.24.1 00:03:47.698 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:47.698 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:47.698 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:47.698 [254/268] Linking target lib/librte_net.so.24.1 00:03:47.698 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:47.956 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:47.956 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:47.956 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:47.956 [259/268] Linking target lib/librte_hash.so.24.1 00:03:47.956 [260/268] Linking target lib/librte_security.so.24.1 00:03:47.956 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:47.956 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:47.956 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:48.216 [264/268] Linking target lib/librte_power.so.24.1 00:03:50.757 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:50.757 [266/268] Linking static target lib/librte_vhost.a 00:03:51.695 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.695 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:51.695 INFO: autodetecting backend as ninja 00:03:51.695 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:52.630 CC lib/ut_mock/mock.o 00:03:52.630 CC lib/log/log.o 00:03:52.630 CC lib/log/log_flags.o 00:03:52.630 CC lib/ut/ut.o 00:03:52.630 CC lib/log/log_deprecated.o 00:03:52.889 LIB 
libspdk_log.a 00:03:52.889 LIB libspdk_ut.a 00:03:52.889 LIB libspdk_ut_mock.a 00:03:52.889 SO libspdk_log.so.7.0 00:03:52.889 SO libspdk_ut.so.2.0 00:03:52.889 SO libspdk_ut_mock.so.6.0 00:03:52.889 SYMLINK libspdk_ut.so 00:03:52.889 SYMLINK libspdk_ut_mock.so 00:03:52.889 SYMLINK libspdk_log.so 00:03:53.147 CXX lib/trace_parser/trace.o 00:03:53.147 CC lib/util/base64.o 00:03:53.147 CC lib/ioat/ioat.o 00:03:53.147 CC lib/dma/dma.o 00:03:53.147 CC lib/util/bit_array.o 00:03:53.147 CC lib/util/cpuset.o 00:03:53.147 CC lib/util/crc16.o 00:03:53.147 CC lib/util/crc32.o 00:03:53.147 CC lib/util/crc32c.o 00:03:53.147 CC lib/util/crc32_ieee.o 00:03:53.147 CC lib/util/crc64.o 00:03:53.147 CC lib/util/dif.o 00:03:53.147 CC lib/util/fd.o 00:03:53.147 CC lib/util/fd_group.o 00:03:53.147 CC lib/util/file.o 00:03:53.147 CC lib/util/hexlify.o 00:03:53.147 CC lib/util/iov.o 00:03:53.147 CC lib/util/math.o 00:03:53.147 CC lib/util/net.o 00:03:53.147 CC lib/util/pipe.o 00:03:53.147 CC lib/util/strerror_tls.o 00:03:53.147 CC lib/util/string.o 00:03:53.147 CC lib/util/uuid.o 00:03:53.147 CC lib/util/xor.o 00:03:53.147 CC lib/util/zipf.o 00:03:53.147 CC lib/vfio_user/host/vfio_user_pci.o 00:03:53.147 CC lib/vfio_user/host/vfio_user.o 00:03:53.405 LIB libspdk_dma.a 00:03:53.405 SO libspdk_dma.so.4.0 00:03:53.405 LIB libspdk_ioat.a 00:03:53.405 SYMLINK libspdk_dma.so 00:03:53.405 SO libspdk_ioat.so.7.0 00:03:53.405 SYMLINK libspdk_ioat.so 00:03:53.405 LIB libspdk_vfio_user.a 00:03:53.405 SO libspdk_vfio_user.so.5.0 00:03:53.405 SYMLINK libspdk_vfio_user.so 00:03:53.664 LIB libspdk_util.a 00:03:53.664 SO libspdk_util.so.10.0 00:03:53.923 SYMLINK libspdk_util.so 00:03:53.923 CC lib/conf/conf.o 00:03:53.923 CC lib/rdma_provider/common.o 00:03:53.923 CC lib/rdma_utils/rdma_utils.o 00:03:53.923 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:53.923 CC lib/idxd/idxd.o 00:03:53.923 CC lib/json/json_parse.o 00:03:53.923 CC lib/vmd/vmd.o 00:03:53.923 CC lib/env_dpdk/env.o 00:03:53.923 CC lib/idxd/idxd_user.o 00:03:53.923 CC lib/json/json_util.o 00:03:53.923 CC lib/vmd/led.o 00:03:53.923 CC lib/env_dpdk/memory.o 00:03:53.923 CC lib/json/json_write.o 00:03:53.923 CC lib/idxd/idxd_kernel.o 00:03:53.923 CC lib/env_dpdk/pci.o 00:03:53.923 CC lib/env_dpdk/init.o 00:03:53.923 CC lib/env_dpdk/threads.o 00:03:53.923 CC lib/env_dpdk/pci_ioat.o 00:03:53.923 CC lib/env_dpdk/pci_virtio.o 00:03:53.923 CC lib/env_dpdk/pci_vmd.o 00:03:53.923 CC lib/env_dpdk/pci_idxd.o 00:03:53.923 CC lib/env_dpdk/pci_event.o 00:03:53.923 CC lib/env_dpdk/sigbus_handler.o 00:03:53.923 CC lib/env_dpdk/pci_dpdk.o 00:03:53.923 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:53.923 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:54.181 LIB libspdk_rdma_provider.a 00:03:54.181 SO libspdk_rdma_provider.so.6.0 00:03:54.181 LIB libspdk_conf.a 00:03:54.181 SO libspdk_conf.so.6.0 00:03:54.181 LIB libspdk_rdma_utils.a 00:03:54.181 SYMLINK libspdk_rdma_provider.so 00:03:54.181 LIB libspdk_json.a 00:03:54.181 SO libspdk_rdma_utils.so.1.0 00:03:54.181 SYMLINK libspdk_conf.so 00:03:54.440 SO libspdk_json.so.6.0 00:03:54.440 SYMLINK libspdk_rdma_utils.so 00:03:54.440 SYMLINK libspdk_json.so 00:03:54.440 CC lib/jsonrpc/jsonrpc_server.o 00:03:54.440 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:54.440 CC lib/jsonrpc/jsonrpc_client.o 00:03:54.440 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:54.698 LIB libspdk_vmd.a 00:03:54.698 LIB libspdk_idxd.a 00:03:54.698 SO libspdk_vmd.so.6.0 00:03:54.698 SO libspdk_idxd.so.12.0 00:03:54.698 SYMLINK libspdk_vmd.so 00:03:54.698 SYMLINK libspdk_idxd.so 
00:03:54.698 LIB libspdk_jsonrpc.a 00:03:54.957 SO libspdk_jsonrpc.so.6.0 00:03:54.957 SYMLINK libspdk_jsonrpc.so 00:03:54.957 LIB libspdk_trace_parser.a 00:03:54.957 SO libspdk_trace_parser.so.5.0 00:03:54.957 SYMLINK libspdk_trace_parser.so 00:03:54.957 CC lib/rpc/rpc.o 00:03:55.216 LIB libspdk_rpc.a 00:03:55.216 SO libspdk_rpc.so.6.0 00:03:55.473 SYMLINK libspdk_rpc.so 00:03:55.473 CC lib/keyring/keyring.o 00:03:55.473 CC lib/keyring/keyring_rpc.o 00:03:55.473 CC lib/notify/notify.o 00:03:55.473 CC lib/trace/trace.o 00:03:55.473 CC lib/notify/notify_rpc.o 00:03:55.473 CC lib/trace/trace_flags.o 00:03:55.473 CC lib/trace/trace_rpc.o 00:03:55.731 LIB libspdk_notify.a 00:03:55.731 SO libspdk_notify.so.6.0 00:03:55.731 LIB libspdk_keyring.a 00:03:55.731 SYMLINK libspdk_notify.so 00:03:55.731 LIB libspdk_trace.a 00:03:55.731 SO libspdk_keyring.so.1.0 00:03:55.731 SO libspdk_trace.so.10.0 00:03:55.731 SYMLINK libspdk_keyring.so 00:03:55.990 SYMLINK libspdk_trace.so 00:03:55.990 CC lib/thread/thread.o 00:03:55.990 CC lib/thread/iobuf.o 00:03:55.990 CC lib/sock/sock.o 00:03:55.990 CC lib/sock/sock_rpc.o 00:03:55.990 LIB libspdk_env_dpdk.a 00:03:56.249 SO libspdk_env_dpdk.so.15.0 00:03:56.250 SYMLINK libspdk_env_dpdk.so 00:03:56.509 LIB libspdk_sock.a 00:03:56.509 SO libspdk_sock.so.10.0 00:03:56.509 SYMLINK libspdk_sock.so 00:03:56.768 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:56.768 CC lib/nvme/nvme_ctrlr.o 00:03:56.768 CC lib/nvme/nvme_fabric.o 00:03:56.768 CC lib/nvme/nvme_ns_cmd.o 00:03:56.768 CC lib/nvme/nvme_ns.o 00:03:56.768 CC lib/nvme/nvme_pcie_common.o 00:03:56.768 CC lib/nvme/nvme_pcie.o 00:03:56.768 CC lib/nvme/nvme_qpair.o 00:03:56.768 CC lib/nvme/nvme.o 00:03:56.768 CC lib/nvme/nvme_quirks.o 00:03:56.768 CC lib/nvme/nvme_transport.o 00:03:56.768 CC lib/nvme/nvme_discovery.o 00:03:56.768 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:56.768 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:56.768 CC lib/nvme/nvme_tcp.o 00:03:56.768 CC lib/nvme/nvme_opal.o 00:03:56.768 CC lib/nvme/nvme_io_msg.o 00:03:56.768 CC lib/nvme/nvme_poll_group.o 00:03:56.768 CC lib/nvme/nvme_zns.o 00:03:56.768 CC lib/nvme/nvme_stubs.o 00:03:56.768 CC lib/nvme/nvme_auth.o 00:03:56.768 CC lib/nvme/nvme_cuse.o 00:03:56.768 CC lib/nvme/nvme_vfio_user.o 00:03:56.768 CC lib/nvme/nvme_rdma.o 00:03:57.705 LIB libspdk_thread.a 00:03:57.705 SO libspdk_thread.so.10.1 00:03:57.705 SYMLINK libspdk_thread.so 00:03:57.705 CC lib/accel/accel.o 00:03:57.705 CC lib/accel/accel_rpc.o 00:03:57.705 CC lib/accel/accel_sw.o 00:03:57.705 CC lib/blob/blobstore.o 00:03:57.705 CC lib/init/json_config.o 00:03:57.705 CC lib/virtio/virtio.o 00:03:57.705 CC lib/init/subsystem.o 00:03:57.705 CC lib/virtio/virtio_vhost_user.o 00:03:57.705 CC lib/blob/request.o 00:03:57.705 CC lib/vfu_tgt/tgt_endpoint.o 00:03:57.705 CC lib/virtio/virtio_vfio_user.o 00:03:57.705 CC lib/init/subsystem_rpc.o 00:03:57.705 CC lib/vfu_tgt/tgt_rpc.o 00:03:57.705 CC lib/virtio/virtio_pci.o 00:03:57.705 CC lib/blob/zeroes.o 00:03:57.705 CC lib/init/rpc.o 00:03:57.705 CC lib/blob/blob_bs_dev.o 00:03:57.964 LIB libspdk_init.a 00:03:58.223 SO libspdk_init.so.5.0 00:03:58.223 LIB libspdk_virtio.a 00:03:58.223 LIB libspdk_vfu_tgt.a 00:03:58.223 SYMLINK libspdk_init.so 00:03:58.223 SO libspdk_virtio.so.7.0 00:03:58.223 SO libspdk_vfu_tgt.so.3.0 00:03:58.223 SYMLINK libspdk_vfu_tgt.so 00:03:58.223 SYMLINK libspdk_virtio.so 00:03:58.223 CC lib/event/app.o 00:03:58.223 CC lib/event/reactor.o 00:03:58.223 CC lib/event/log_rpc.o 00:03:58.223 CC lib/event/app_rpc.o 00:03:58.223 CC 
lib/event/scheduler_static.o 00:03:58.790 LIB libspdk_event.a 00:03:58.790 SO libspdk_event.so.14.0 00:03:58.790 SYMLINK libspdk_event.so 00:03:58.790 LIB libspdk_accel.a 00:03:59.049 SO libspdk_accel.so.16.0 00:03:59.049 SYMLINK libspdk_accel.so 00:03:59.049 LIB libspdk_nvme.a 00:03:59.308 CC lib/bdev/bdev.o 00:03:59.308 CC lib/bdev/bdev_rpc.o 00:03:59.308 CC lib/bdev/bdev_zone.o 00:03:59.308 CC lib/bdev/part.o 00:03:59.308 CC lib/bdev/scsi_nvme.o 00:03:59.308 SO libspdk_nvme.so.13.1 00:03:59.566 SYMLINK libspdk_nvme.so 00:04:00.942 LIB libspdk_blob.a 00:04:00.942 SO libspdk_blob.so.11.0 00:04:00.942 SYMLINK libspdk_blob.so 00:04:00.942 CC lib/blobfs/blobfs.o 00:04:00.942 CC lib/blobfs/tree.o 00:04:00.942 CC lib/lvol/lvol.o 00:04:01.881 LIB libspdk_bdev.a 00:04:01.881 SO libspdk_bdev.so.16.0 00:04:01.881 LIB libspdk_blobfs.a 00:04:01.881 SO libspdk_blobfs.so.10.0 00:04:01.881 SYMLINK libspdk_bdev.so 00:04:01.881 SYMLINK libspdk_blobfs.so 00:04:01.881 LIB libspdk_lvol.a 00:04:01.881 SO libspdk_lvol.so.10.0 00:04:01.881 CC lib/scsi/dev.o 00:04:01.881 CC lib/scsi/lun.o 00:04:01.881 CC lib/scsi/port.o 00:04:01.881 CC lib/scsi/scsi.o 00:04:01.882 CC lib/scsi/scsi_bdev.o 00:04:01.882 CC lib/scsi/scsi_pr.o 00:04:01.882 CC lib/nbd/nbd.o 00:04:01.882 CC lib/scsi/scsi_rpc.o 00:04:01.882 CC lib/nbd/nbd_rpc.o 00:04:01.882 CC lib/scsi/task.o 00:04:01.882 CC lib/ublk/ublk.o 00:04:01.882 CC lib/ublk/ublk_rpc.o 00:04:01.882 CC lib/nvmf/ctrlr.o 00:04:01.882 CC lib/ftl/ftl_core.o 00:04:01.882 CC lib/nvmf/ctrlr_discovery.o 00:04:01.882 CC lib/ftl/ftl_init.o 00:04:01.882 CC lib/nvmf/ctrlr_bdev.o 00:04:01.882 CC lib/ftl/ftl_layout.o 00:04:01.882 CC lib/nvmf/subsystem.o 00:04:01.882 CC lib/nvmf/nvmf.o 00:04:01.882 CC lib/ftl/ftl_debug.o 00:04:01.882 CC lib/ftl/ftl_io.o 00:04:01.882 CC lib/nvmf/nvmf_rpc.o 00:04:01.882 CC lib/ftl/ftl_sb.o 00:04:01.882 CC lib/ftl/ftl_l2p.o 00:04:01.882 CC lib/nvmf/transport.o 00:04:01.882 CC lib/nvmf/tcp.o 00:04:01.882 SYMLINK libspdk_lvol.so 00:04:01.882 CC lib/ftl/ftl_nv_cache.o 00:04:01.882 CC lib/ftl/ftl_l2p_flat.o 00:04:01.882 CC lib/nvmf/stubs.o 00:04:01.882 CC lib/nvmf/mdns_server.o 00:04:01.882 CC lib/ftl/ftl_band.o 00:04:01.882 CC lib/nvmf/vfio_user.o 00:04:01.882 CC lib/nvmf/rdma.o 00:04:01.882 CC lib/ftl/ftl_band_ops.o 00:04:01.882 CC lib/ftl/ftl_writer.o 00:04:01.882 CC lib/nvmf/auth.o 00:04:01.882 CC lib/ftl/ftl_rq.o 00:04:01.882 CC lib/ftl/ftl_reloc.o 00:04:01.882 CC lib/ftl/ftl_l2p_cache.o 00:04:01.882 CC lib/ftl/ftl_p2l.o 00:04:01.882 CC lib/ftl/mngt/ftl_mngt.o 00:04:02.149 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:02.149 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:02.149 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:02.149 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:02.149 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:02.149 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:02.419 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:02.419 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:02.419 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:02.419 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:02.419 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:02.419 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:02.419 CC lib/ftl/utils/ftl_conf.o 00:04:02.419 CC lib/ftl/utils/ftl_md.o 00:04:02.419 CC lib/ftl/utils/ftl_mempool.o 00:04:02.419 CC lib/ftl/utils/ftl_bitmap.o 00:04:02.419 CC lib/ftl/utils/ftl_property.o 00:04:02.419 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:02.419 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:02.419 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:02.419 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:02.419 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:04:02.419 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:02.681 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:02.681 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:02.681 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:02.681 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:02.681 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:02.681 CC lib/ftl/base/ftl_base_dev.o 00:04:02.682 CC lib/ftl/base/ftl_base_bdev.o 00:04:02.682 CC lib/ftl/ftl_trace.o 00:04:02.943 LIB libspdk_nbd.a 00:04:02.943 SO libspdk_nbd.so.7.0 00:04:02.943 LIB libspdk_scsi.a 00:04:02.943 SYMLINK libspdk_nbd.so 00:04:02.943 SO libspdk_scsi.so.9.0 00:04:03.203 SYMLINK libspdk_scsi.so 00:04:03.203 LIB libspdk_ublk.a 00:04:03.203 SO libspdk_ublk.so.3.0 00:04:03.203 SYMLINK libspdk_ublk.so 00:04:03.203 CC lib/vhost/vhost.o 00:04:03.203 CC lib/iscsi/conn.o 00:04:03.203 CC lib/vhost/vhost_rpc.o 00:04:03.203 CC lib/iscsi/init_grp.o 00:04:03.203 CC lib/vhost/vhost_scsi.o 00:04:03.203 CC lib/iscsi/iscsi.o 00:04:03.203 CC lib/vhost/vhost_blk.o 00:04:03.203 CC lib/iscsi/md5.o 00:04:03.203 CC lib/vhost/rte_vhost_user.o 00:04:03.203 CC lib/iscsi/param.o 00:04:03.203 CC lib/iscsi/portal_grp.o 00:04:03.203 CC lib/iscsi/tgt_node.o 00:04:03.203 CC lib/iscsi/iscsi_subsystem.o 00:04:03.203 CC lib/iscsi/iscsi_rpc.o 00:04:03.203 CC lib/iscsi/task.o 00:04:03.462 LIB libspdk_ftl.a 00:04:03.719 SO libspdk_ftl.so.9.0 00:04:03.978 SYMLINK libspdk_ftl.so 00:04:04.548 LIB libspdk_vhost.a 00:04:04.548 SO libspdk_vhost.so.8.0 00:04:04.548 LIB libspdk_nvmf.a 00:04:04.548 SYMLINK libspdk_vhost.so 00:04:04.548 SO libspdk_nvmf.so.19.0 00:04:04.807 LIB libspdk_iscsi.a 00:04:04.807 SO libspdk_iscsi.so.8.0 00:04:04.807 SYMLINK libspdk_nvmf.so 00:04:04.807 SYMLINK libspdk_iscsi.so 00:04:05.066 CC module/vfu_device/vfu_virtio.o 00:04:05.066 CC module/vfu_device/vfu_virtio_blk.o 00:04:05.066 CC module/env_dpdk/env_dpdk_rpc.o 00:04:05.066 CC module/vfu_device/vfu_virtio_scsi.o 00:04:05.066 CC module/vfu_device/vfu_virtio_rpc.o 00:04:05.333 CC module/blob/bdev/blob_bdev.o 00:04:05.333 CC module/accel/error/accel_error.o 00:04:05.333 CC module/accel/ioat/accel_ioat.o 00:04:05.333 CC module/accel/dsa/accel_dsa.o 00:04:05.333 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:05.333 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:05.333 CC module/scheduler/gscheduler/gscheduler.o 00:04:05.333 CC module/accel/error/accel_error_rpc.o 00:04:05.333 CC module/keyring/linux/keyring.o 00:04:05.333 CC module/accel/ioat/accel_ioat_rpc.o 00:04:05.333 CC module/accel/dsa/accel_dsa_rpc.o 00:04:05.333 CC module/keyring/linux/keyring_rpc.o 00:04:05.333 CC module/sock/posix/posix.o 00:04:05.333 CC module/accel/iaa/accel_iaa.o 00:04:05.333 CC module/accel/iaa/accel_iaa_rpc.o 00:04:05.333 CC module/keyring/file/keyring.o 00:04:05.333 CC module/keyring/file/keyring_rpc.o 00:04:05.333 LIB libspdk_env_dpdk_rpc.a 00:04:05.333 SO libspdk_env_dpdk_rpc.so.6.0 00:04:05.333 SYMLINK libspdk_env_dpdk_rpc.so 00:04:05.333 LIB libspdk_keyring_file.a 00:04:05.333 LIB libspdk_scheduler_gscheduler.a 00:04:05.333 LIB libspdk_scheduler_dpdk_governor.a 00:04:05.333 LIB libspdk_keyring_linux.a 00:04:05.597 SO libspdk_keyring_file.so.1.0 00:04:05.597 SO libspdk_scheduler_gscheduler.so.4.0 00:04:05.597 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:05.597 LIB libspdk_accel_error.a 00:04:05.597 SO libspdk_keyring_linux.so.1.0 00:04:05.597 LIB libspdk_accel_ioat.a 00:04:05.597 LIB libspdk_scheduler_dynamic.a 00:04:05.597 SO libspdk_accel_error.so.2.0 00:04:05.597 LIB libspdk_accel_iaa.a 00:04:05.597 SO 
libspdk_accel_ioat.so.6.0 00:04:05.597 SO libspdk_scheduler_dynamic.so.4.0 00:04:05.597 SYMLINK libspdk_scheduler_gscheduler.so 00:04:05.597 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:05.597 SYMLINK libspdk_keyring_file.so 00:04:05.597 SO libspdk_accel_iaa.so.3.0 00:04:05.597 SYMLINK libspdk_keyring_linux.so 00:04:05.597 LIB libspdk_accel_dsa.a 00:04:05.597 SYMLINK libspdk_accel_error.so 00:04:05.597 LIB libspdk_blob_bdev.a 00:04:05.597 SYMLINK libspdk_accel_ioat.so 00:04:05.597 SYMLINK libspdk_scheduler_dynamic.so 00:04:05.597 SO libspdk_accel_dsa.so.5.0 00:04:05.597 SYMLINK libspdk_accel_iaa.so 00:04:05.597 SO libspdk_blob_bdev.so.11.0 00:04:05.597 SYMLINK libspdk_accel_dsa.so 00:04:05.597 SYMLINK libspdk_blob_bdev.so 00:04:05.859 LIB libspdk_vfu_device.a 00:04:05.859 SO libspdk_vfu_device.so.3.0 00:04:05.859 CC module/bdev/delay/vbdev_delay.o 00:04:05.859 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:05.859 CC module/blobfs/bdev/blobfs_bdev.o 00:04:05.859 CC module/bdev/error/vbdev_error.o 00:04:05.859 CC module/bdev/malloc/bdev_malloc.o 00:04:05.859 CC module/bdev/null/bdev_null.o 00:04:05.859 CC module/bdev/lvol/vbdev_lvol.o 00:04:05.859 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:05.859 CC module/bdev/null/bdev_null_rpc.o 00:04:05.859 CC module/bdev/error/vbdev_error_rpc.o 00:04:05.859 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:05.859 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:05.859 CC module/bdev/nvme/bdev_nvme.o 00:04:05.859 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:05.859 CC module/bdev/aio/bdev_aio.o 00:04:05.859 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:05.859 CC module/bdev/gpt/gpt.o 00:04:05.859 CC module/bdev/aio/bdev_aio_rpc.o 00:04:05.859 CC module/bdev/nvme/nvme_rpc.o 00:04:05.859 CC module/bdev/split/vbdev_split.o 00:04:05.859 CC module/bdev/nvme/bdev_mdns_client.o 00:04:05.859 CC module/bdev/gpt/vbdev_gpt.o 00:04:05.859 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:05.859 CC module/bdev/raid/bdev_raid.o 00:04:05.859 CC module/bdev/split/vbdev_split_rpc.o 00:04:05.859 CC module/bdev/passthru/vbdev_passthru.o 00:04:05.859 CC module/bdev/nvme/vbdev_opal.o 00:04:05.859 CC module/bdev/ftl/bdev_ftl.o 00:04:05.859 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:05.859 CC module/bdev/iscsi/bdev_iscsi.o 00:04:05.859 CC module/bdev/raid/bdev_raid_rpc.o 00:04:05.859 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:05.859 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:05.859 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:05.859 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:05.859 CC module/bdev/raid/bdev_raid_sb.o 00:04:05.859 CC module/bdev/raid/raid0.o 00:04:05.859 CC module/bdev/raid/raid1.o 00:04:05.859 CC module/bdev/raid/concat.o 00:04:05.859 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:05.859 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:05.859 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:06.118 SYMLINK libspdk_vfu_device.so 00:04:06.118 LIB libspdk_sock_posix.a 00:04:06.118 SO libspdk_sock_posix.so.6.0 00:04:06.377 LIB libspdk_blobfs_bdev.a 00:04:06.377 SO libspdk_blobfs_bdev.so.6.0 00:04:06.377 SYMLINK libspdk_sock_posix.so 00:04:06.377 LIB libspdk_bdev_null.a 00:04:06.377 SYMLINK libspdk_blobfs_bdev.so 00:04:06.377 LIB libspdk_bdev_split.a 00:04:06.377 LIB libspdk_bdev_gpt.a 00:04:06.377 SO libspdk_bdev_null.so.6.0 00:04:06.377 LIB libspdk_bdev_ftl.a 00:04:06.377 SO libspdk_bdev_split.so.6.0 00:04:06.377 LIB libspdk_bdev_error.a 00:04:06.377 SO libspdk_bdev_gpt.so.6.0 00:04:06.377 SO libspdk_bdev_ftl.so.6.0 00:04:06.377 SO libspdk_bdev_error.so.6.0 
00:04:06.377 LIB libspdk_bdev_delay.a 00:04:06.377 SYMLINK libspdk_bdev_null.so 00:04:06.377 SYMLINK libspdk_bdev_split.so 00:04:06.377 LIB libspdk_bdev_malloc.a 00:04:06.377 SO libspdk_bdev_delay.so.6.0 00:04:06.377 LIB libspdk_bdev_passthru.a 00:04:06.377 LIB libspdk_bdev_iscsi.a 00:04:06.377 SYMLINK libspdk_bdev_gpt.so 00:04:06.377 SYMLINK libspdk_bdev_error.so 00:04:06.377 SYMLINK libspdk_bdev_ftl.so 00:04:06.377 LIB libspdk_bdev_zone_block.a 00:04:06.377 SO libspdk_bdev_malloc.so.6.0 00:04:06.377 SO libspdk_bdev_passthru.so.6.0 00:04:06.377 SO libspdk_bdev_iscsi.so.6.0 00:04:06.377 SO libspdk_bdev_zone_block.so.6.0 00:04:06.377 LIB libspdk_bdev_aio.a 00:04:06.636 SYMLINK libspdk_bdev_delay.so 00:04:06.636 SO libspdk_bdev_aio.so.6.0 00:04:06.636 SYMLINK libspdk_bdev_passthru.so 00:04:06.636 SYMLINK libspdk_bdev_malloc.so 00:04:06.636 SYMLINK libspdk_bdev_zone_block.so 00:04:06.636 SYMLINK libspdk_bdev_iscsi.so 00:04:06.636 SYMLINK libspdk_bdev_aio.so 00:04:06.636 LIB libspdk_bdev_lvol.a 00:04:06.636 SO libspdk_bdev_lvol.so.6.0 00:04:06.636 LIB libspdk_bdev_virtio.a 00:04:06.636 SYMLINK libspdk_bdev_lvol.so 00:04:06.636 SO libspdk_bdev_virtio.so.6.0 00:04:06.895 SYMLINK libspdk_bdev_virtio.so 00:04:06.895 LIB libspdk_bdev_raid.a 00:04:07.153 SO libspdk_bdev_raid.so.6.0 00:04:07.153 SYMLINK libspdk_bdev_raid.so 00:04:08.537 LIB libspdk_bdev_nvme.a 00:04:08.537 SO libspdk_bdev_nvme.so.7.0 00:04:08.537 SYMLINK libspdk_bdev_nvme.so 00:04:08.795 CC module/event/subsystems/sock/sock.o 00:04:08.795 CC module/event/subsystems/keyring/keyring.o 00:04:08.795 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:08.795 CC module/event/subsystems/iobuf/iobuf.o 00:04:08.795 CC module/event/subsystems/vmd/vmd.o 00:04:08.795 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:08.795 CC module/event/subsystems/scheduler/scheduler.o 00:04:08.795 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:08.795 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:08.795 LIB libspdk_event_keyring.a 00:04:08.795 LIB libspdk_event_vhost_blk.a 00:04:08.795 LIB libspdk_event_vfu_tgt.a 00:04:08.795 LIB libspdk_event_vmd.a 00:04:08.795 LIB libspdk_event_scheduler.a 00:04:08.795 LIB libspdk_event_sock.a 00:04:08.795 SO libspdk_event_keyring.so.1.0 00:04:08.795 LIB libspdk_event_iobuf.a 00:04:08.795 SO libspdk_event_vhost_blk.so.3.0 00:04:08.795 SO libspdk_event_vfu_tgt.so.3.0 00:04:08.795 SO libspdk_event_scheduler.so.4.0 00:04:09.054 SO libspdk_event_sock.so.5.0 00:04:09.054 SO libspdk_event_vmd.so.6.0 00:04:09.054 SO libspdk_event_iobuf.so.3.0 00:04:09.054 SYMLINK libspdk_event_keyring.so 00:04:09.054 SYMLINK libspdk_event_vhost_blk.so 00:04:09.054 SYMLINK libspdk_event_vfu_tgt.so 00:04:09.054 SYMLINK libspdk_event_scheduler.so 00:04:09.054 SYMLINK libspdk_event_sock.so 00:04:09.054 SYMLINK libspdk_event_vmd.so 00:04:09.054 SYMLINK libspdk_event_iobuf.so 00:04:09.054 CC module/event/subsystems/accel/accel.o 00:04:09.312 LIB libspdk_event_accel.a 00:04:09.312 SO libspdk_event_accel.so.6.0 00:04:09.312 SYMLINK libspdk_event_accel.so 00:04:09.570 CC module/event/subsystems/bdev/bdev.o 00:04:09.830 LIB libspdk_event_bdev.a 00:04:09.830 SO libspdk_event_bdev.so.6.0 00:04:09.830 SYMLINK libspdk_event_bdev.so 00:04:10.089 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:10.089 CC module/event/subsystems/ublk/ublk.o 00:04:10.089 CC module/event/subsystems/scsi/scsi.o 00:04:10.089 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:10.089 CC module/event/subsystems/nbd/nbd.o 00:04:10.089 LIB libspdk_event_ublk.a 00:04:10.089 LIB 
libspdk_event_nbd.a 00:04:10.089 LIB libspdk_event_scsi.a 00:04:10.089 SO libspdk_event_ublk.so.3.0 00:04:10.089 SO libspdk_event_nbd.so.6.0 00:04:10.089 SO libspdk_event_scsi.so.6.0 00:04:10.089 SYMLINK libspdk_event_ublk.so 00:04:10.089 SYMLINK libspdk_event_nbd.so 00:04:10.089 SYMLINK libspdk_event_scsi.so 00:04:10.348 LIB libspdk_event_nvmf.a 00:04:10.348 SO libspdk_event_nvmf.so.6.0 00:04:10.348 SYMLINK libspdk_event_nvmf.so 00:04:10.348 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:10.348 CC module/event/subsystems/iscsi/iscsi.o 00:04:10.605 LIB libspdk_event_vhost_scsi.a 00:04:10.605 SO libspdk_event_vhost_scsi.so.3.0 00:04:10.605 LIB libspdk_event_iscsi.a 00:04:10.605 SO libspdk_event_iscsi.so.6.0 00:04:10.605 SYMLINK libspdk_event_vhost_scsi.so 00:04:10.605 SYMLINK libspdk_event_iscsi.so 00:04:10.865 SO libspdk.so.6.0 00:04:10.865 SYMLINK libspdk.so 00:04:10.865 CXX app/trace/trace.o 00:04:10.865 CC app/trace_record/trace_record.o 00:04:10.865 CC app/spdk_nvme_perf/perf.o 00:04:10.865 CC app/spdk_top/spdk_top.o 00:04:10.865 CC test/rpc_client/rpc_client_test.o 00:04:10.865 CC app/spdk_nvme_identify/identify.o 00:04:10.865 CC app/spdk_nvme_discover/discovery_aer.o 00:04:10.865 CC app/spdk_lspci/spdk_lspci.o 00:04:10.865 TEST_HEADER include/spdk/accel.h 00:04:10.865 TEST_HEADER include/spdk/accel_module.h 00:04:10.865 TEST_HEADER include/spdk/assert.h 00:04:10.865 TEST_HEADER include/spdk/barrier.h 00:04:10.865 TEST_HEADER include/spdk/base64.h 00:04:10.865 TEST_HEADER include/spdk/bdev.h 00:04:10.865 TEST_HEADER include/spdk/bdev_module.h 00:04:10.865 TEST_HEADER include/spdk/bdev_zone.h 00:04:10.865 TEST_HEADER include/spdk/bit_array.h 00:04:10.865 TEST_HEADER include/spdk/bit_pool.h 00:04:10.865 TEST_HEADER include/spdk/blob_bdev.h 00:04:10.865 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:10.865 TEST_HEADER include/spdk/blob.h 00:04:10.865 TEST_HEADER include/spdk/blobfs.h 00:04:10.865 TEST_HEADER include/spdk/conf.h 00:04:10.865 TEST_HEADER include/spdk/config.h 00:04:10.865 TEST_HEADER include/spdk/cpuset.h 00:04:10.865 TEST_HEADER include/spdk/crc16.h 00:04:10.865 TEST_HEADER include/spdk/crc64.h 00:04:10.865 TEST_HEADER include/spdk/crc32.h 00:04:10.865 TEST_HEADER include/spdk/dif.h 00:04:10.865 TEST_HEADER include/spdk/dma.h 00:04:10.865 TEST_HEADER include/spdk/env_dpdk.h 00:04:10.865 TEST_HEADER include/spdk/endian.h 00:04:10.865 TEST_HEADER include/spdk/env.h 00:04:10.865 TEST_HEADER include/spdk/event.h 00:04:10.865 TEST_HEADER include/spdk/fd_group.h 00:04:10.865 TEST_HEADER include/spdk/fd.h 00:04:10.865 TEST_HEADER include/spdk/file.h 00:04:10.865 TEST_HEADER include/spdk/ftl.h 00:04:10.865 TEST_HEADER include/spdk/gpt_spec.h 00:04:10.865 TEST_HEADER include/spdk/hexlify.h 00:04:10.865 TEST_HEADER include/spdk/histogram_data.h 00:04:10.865 TEST_HEADER include/spdk/idxd.h 00:04:10.865 TEST_HEADER include/spdk/idxd_spec.h 00:04:10.865 TEST_HEADER include/spdk/init.h 00:04:10.865 TEST_HEADER include/spdk/ioat.h 00:04:10.865 TEST_HEADER include/spdk/ioat_spec.h 00:04:10.865 TEST_HEADER include/spdk/iscsi_spec.h 00:04:10.865 TEST_HEADER include/spdk/json.h 00:04:10.865 TEST_HEADER include/spdk/jsonrpc.h 00:04:10.865 TEST_HEADER include/spdk/keyring.h 00:04:10.865 TEST_HEADER include/spdk/keyring_module.h 00:04:10.865 TEST_HEADER include/spdk/likely.h 00:04:10.865 TEST_HEADER include/spdk/log.h 00:04:10.865 TEST_HEADER include/spdk/lvol.h 00:04:10.865 TEST_HEADER include/spdk/mmio.h 00:04:10.865 TEST_HEADER include/spdk/memory.h 00:04:10.865 TEST_HEADER 
include/spdk/nbd.h 00:04:10.865 TEST_HEADER include/spdk/net.h 00:04:10.865 TEST_HEADER include/spdk/notify.h 00:04:10.865 TEST_HEADER include/spdk/nvme.h 00:04:10.865 TEST_HEADER include/spdk/nvme_intel.h 00:04:10.865 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:10.865 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:10.865 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:10.865 TEST_HEADER include/spdk/nvme_spec.h 00:04:10.865 TEST_HEADER include/spdk/nvme_zns.h 00:04:10.865 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:10.865 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:10.865 TEST_HEADER include/spdk/nvmf.h 00:04:10.865 TEST_HEADER include/spdk/nvmf_spec.h 00:04:10.865 TEST_HEADER include/spdk/nvmf_transport.h 00:04:10.865 TEST_HEADER include/spdk/opal_spec.h 00:04:10.865 TEST_HEADER include/spdk/opal.h 00:04:10.865 TEST_HEADER include/spdk/pipe.h 00:04:10.865 TEST_HEADER include/spdk/pci_ids.h 00:04:10.865 TEST_HEADER include/spdk/queue.h 00:04:10.865 TEST_HEADER include/spdk/reduce.h 00:04:10.865 TEST_HEADER include/spdk/rpc.h 00:04:10.865 TEST_HEADER include/spdk/scheduler.h 00:04:10.865 TEST_HEADER include/spdk/scsi.h 00:04:10.865 TEST_HEADER include/spdk/scsi_spec.h 00:04:10.865 TEST_HEADER include/spdk/sock.h 00:04:10.865 TEST_HEADER include/spdk/stdinc.h 00:04:10.865 TEST_HEADER include/spdk/string.h 00:04:10.865 TEST_HEADER include/spdk/thread.h 00:04:10.865 TEST_HEADER include/spdk/trace.h 00:04:10.865 TEST_HEADER include/spdk/trace_parser.h 00:04:10.865 TEST_HEADER include/spdk/tree.h 00:04:11.130 TEST_HEADER include/spdk/ublk.h 00:04:11.130 TEST_HEADER include/spdk/util.h 00:04:11.130 TEST_HEADER include/spdk/uuid.h 00:04:11.130 TEST_HEADER include/spdk/version.h 00:04:11.130 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:11.130 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:11.130 TEST_HEADER include/spdk/vhost.h 00:04:11.130 TEST_HEADER include/spdk/vmd.h 00:04:11.130 TEST_HEADER include/spdk/xor.h 00:04:11.130 TEST_HEADER include/spdk/zipf.h 00:04:11.130 CXX test/cpp_headers/accel.o 00:04:11.130 CXX test/cpp_headers/assert.o 00:04:11.130 CXX test/cpp_headers/accel_module.o 00:04:11.130 CXX test/cpp_headers/barrier.o 00:04:11.130 CC app/nvmf_tgt/nvmf_main.o 00:04:11.130 CXX test/cpp_headers/base64.o 00:04:11.130 CXX test/cpp_headers/bdev.o 00:04:11.130 CXX test/cpp_headers/bdev_module.o 00:04:11.130 CC app/spdk_dd/spdk_dd.o 00:04:11.130 CXX test/cpp_headers/bdev_zone.o 00:04:11.130 CXX test/cpp_headers/bit_array.o 00:04:11.130 CXX test/cpp_headers/bit_pool.o 00:04:11.130 CXX test/cpp_headers/blob_bdev.o 00:04:11.130 CXX test/cpp_headers/blobfs_bdev.o 00:04:11.130 CXX test/cpp_headers/blobfs.o 00:04:11.130 CXX test/cpp_headers/blob.o 00:04:11.130 CXX test/cpp_headers/conf.o 00:04:11.130 CXX test/cpp_headers/config.o 00:04:11.130 CXX test/cpp_headers/cpuset.o 00:04:11.130 CXX test/cpp_headers/crc16.o 00:04:11.130 CC app/iscsi_tgt/iscsi_tgt.o 00:04:11.130 CC examples/util/zipf/zipf.o 00:04:11.130 CXX test/cpp_headers/crc32.o 00:04:11.130 CC app/spdk_tgt/spdk_tgt.o 00:04:11.130 CC examples/ioat/perf/perf.o 00:04:11.130 CC test/thread/poller_perf/poller_perf.o 00:04:11.130 CC test/env/vtophys/vtophys.o 00:04:11.130 CC examples/ioat/verify/verify.o 00:04:11.130 CC test/env/pci/pci_ut.o 00:04:11.131 CC test/app/histogram_perf/histogram_perf.o 00:04:11.131 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:11.131 CC test/app/jsoncat/jsoncat.o 00:04:11.131 CC test/env/memory/memory_ut.o 00:04:11.131 CC app/fio/nvme/fio_plugin.o 00:04:11.131 CC test/app/stub/stub.o 00:04:11.131 
CC test/dma/test_dma/test_dma.o 00:04:11.131 CC test/app/bdev_svc/bdev_svc.o 00:04:11.131 CC app/fio/bdev/fio_plugin.o 00:04:11.131 LINK spdk_lspci 00:04:11.407 CC test/env/mem_callbacks/mem_callbacks.o 00:04:11.407 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:11.407 LINK rpc_client_test 00:04:11.407 LINK spdk_nvme_discover 00:04:11.407 LINK interrupt_tgt 00:04:11.407 LINK jsoncat 00:04:11.407 LINK vtophys 00:04:11.407 LINK zipf 00:04:11.407 LINK histogram_perf 00:04:11.407 LINK poller_perf 00:04:11.407 LINK nvmf_tgt 00:04:11.407 CXX test/cpp_headers/crc64.o 00:04:11.407 CXX test/cpp_headers/dif.o 00:04:11.407 CXX test/cpp_headers/dma.o 00:04:11.407 LINK env_dpdk_post_init 00:04:11.407 CXX test/cpp_headers/endian.o 00:04:11.407 CXX test/cpp_headers/env_dpdk.o 00:04:11.407 CXX test/cpp_headers/env.o 00:04:11.407 CXX test/cpp_headers/event.o 00:04:11.407 CXX test/cpp_headers/fd_group.o 00:04:11.407 CXX test/cpp_headers/fd.o 00:04:11.407 CXX test/cpp_headers/file.o 00:04:11.407 CXX test/cpp_headers/ftl.o 00:04:11.407 CXX test/cpp_headers/gpt_spec.o 00:04:11.407 CXX test/cpp_headers/hexlify.o 00:04:11.407 CXX test/cpp_headers/histogram_data.o 00:04:11.407 LINK spdk_trace_record 00:04:11.407 LINK stub 00:04:11.407 LINK iscsi_tgt 00:04:11.407 LINK ioat_perf 00:04:11.675 CXX test/cpp_headers/idxd.o 00:04:11.675 CXX test/cpp_headers/idxd_spec.o 00:04:11.675 LINK verify 00:04:11.675 LINK spdk_tgt 00:04:11.675 LINK bdev_svc 00:04:11.675 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:11.675 CXX test/cpp_headers/init.o 00:04:11.675 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:11.675 CXX test/cpp_headers/ioat.o 00:04:11.675 CXX test/cpp_headers/ioat_spec.o 00:04:11.675 LINK spdk_dd 00:04:11.675 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:11.675 CXX test/cpp_headers/iscsi_spec.o 00:04:11.675 CXX test/cpp_headers/json.o 00:04:11.675 CXX test/cpp_headers/jsonrpc.o 00:04:11.675 CXX test/cpp_headers/keyring.o 00:04:11.939 CXX test/cpp_headers/keyring_module.o 00:04:11.939 LINK pci_ut 00:04:11.939 CXX test/cpp_headers/likely.o 00:04:11.939 CXX test/cpp_headers/log.o 00:04:11.939 CXX test/cpp_headers/lvol.o 00:04:11.939 CXX test/cpp_headers/memory.o 00:04:11.939 CXX test/cpp_headers/mmio.o 00:04:11.939 CXX test/cpp_headers/nbd.o 00:04:11.939 CXX test/cpp_headers/net.o 00:04:11.939 CXX test/cpp_headers/notify.o 00:04:11.939 CXX test/cpp_headers/nvme.o 00:04:11.939 CXX test/cpp_headers/nvme_intel.o 00:04:11.939 CXX test/cpp_headers/nvme_ocssd.o 00:04:11.939 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:11.939 LINK spdk_trace 00:04:11.939 CXX test/cpp_headers/nvme_spec.o 00:04:11.939 CXX test/cpp_headers/nvme_zns.o 00:04:11.939 CXX test/cpp_headers/nvmf_cmd.o 00:04:11.939 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:11.939 CXX test/cpp_headers/nvmf.o 00:04:11.939 CXX test/cpp_headers/nvmf_spec.o 00:04:11.939 CXX test/cpp_headers/nvmf_transport.o 00:04:11.939 CXX test/cpp_headers/opal.o 00:04:11.939 LINK test_dma 00:04:11.939 CXX test/cpp_headers/opal_spec.o 00:04:11.939 CXX test/cpp_headers/pci_ids.o 00:04:11.939 CXX test/cpp_headers/pipe.o 00:04:11.939 CXX test/cpp_headers/queue.o 00:04:12.206 CXX test/cpp_headers/reduce.o 00:04:12.206 CC examples/sock/hello_world/hello_sock.o 00:04:12.206 CXX test/cpp_headers/rpc.o 00:04:12.206 CXX test/cpp_headers/scheduler.o 00:04:12.206 CC test/event/event_perf/event_perf.o 00:04:12.206 LINK nvme_fuzz 00:04:12.206 CC examples/thread/thread/thread_ex.o 00:04:12.206 CC examples/idxd/perf/perf.o 00:04:12.206 CC examples/vmd/lsvmd/lsvmd.o 00:04:12.206 CXX 
test/cpp_headers/scsi.o 00:04:12.206 CXX test/cpp_headers/scsi_spec.o 00:04:12.206 CC test/event/reactor/reactor.o 00:04:12.206 CXX test/cpp_headers/sock.o 00:04:12.206 CC examples/vmd/led/led.o 00:04:12.206 CXX test/cpp_headers/stdinc.o 00:04:12.206 CXX test/cpp_headers/string.o 00:04:12.469 CXX test/cpp_headers/thread.o 00:04:12.469 CXX test/cpp_headers/trace.o 00:04:12.469 CC test/event/reactor_perf/reactor_perf.o 00:04:12.469 CXX test/cpp_headers/trace_parser.o 00:04:12.469 CC test/event/app_repeat/app_repeat.o 00:04:12.469 CXX test/cpp_headers/tree.o 00:04:12.469 CXX test/cpp_headers/ublk.o 00:04:12.469 CXX test/cpp_headers/util.o 00:04:12.469 CXX test/cpp_headers/uuid.o 00:04:12.469 LINK spdk_bdev 00:04:12.469 CXX test/cpp_headers/version.o 00:04:12.469 CXX test/cpp_headers/vfio_user_pci.o 00:04:12.469 CXX test/cpp_headers/vfio_user_spec.o 00:04:12.469 CXX test/cpp_headers/vhost.o 00:04:12.469 CXX test/cpp_headers/vmd.o 00:04:12.469 CXX test/cpp_headers/xor.o 00:04:12.469 CXX test/cpp_headers/zipf.o 00:04:12.469 LINK spdk_nvme 00:04:12.469 CC test/event/scheduler/scheduler.o 00:04:12.469 LINK spdk_nvme_perf 00:04:12.469 LINK mem_callbacks 00:04:12.469 LINK lsvmd 00:04:12.469 LINK event_perf 00:04:12.469 LINK spdk_nvme_identify 00:04:12.469 CC app/vhost/vhost.o 00:04:12.469 LINK vhost_fuzz 00:04:12.730 LINK reactor 00:04:12.730 LINK spdk_top 00:04:12.730 LINK led 00:04:12.730 LINK reactor_perf 00:04:12.730 LINK hello_sock 00:04:12.730 LINK app_repeat 00:04:12.730 LINK thread 00:04:12.730 CC test/nvme/e2edp/nvme_dp.o 00:04:12.730 CC test/nvme/overhead/overhead.o 00:04:12.730 CC test/nvme/aer/aer.o 00:04:12.730 CC test/nvme/sgl/sgl.o 00:04:12.730 CC test/nvme/reset/reset.o 00:04:12.730 CC test/nvme/startup/startup.o 00:04:12.730 CC test/nvme/err_injection/err_injection.o 00:04:12.730 CC test/nvme/reserve/reserve.o 00:04:12.730 CC test/nvme/simple_copy/simple_copy.o 00:04:12.730 CC test/nvme/connect_stress/connect_stress.o 00:04:12.730 CC test/nvme/boot_partition/boot_partition.o 00:04:12.730 CC test/accel/dif/dif.o 00:04:12.730 CC test/blobfs/mkfs/mkfs.o 00:04:12.730 CC test/nvme/compliance/nvme_compliance.o 00:04:12.730 CC test/nvme/fused_ordering/fused_ordering.o 00:04:12.730 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:12.989 CC test/nvme/fdp/fdp.o 00:04:12.989 CC test/lvol/esnap/esnap.o 00:04:12.989 CC test/nvme/cuse/cuse.o 00:04:12.989 LINK idxd_perf 00:04:12.989 LINK vhost 00:04:12.989 LINK scheduler 00:04:12.989 LINK boot_partition 00:04:12.989 LINK connect_stress 00:04:12.989 LINK startup 00:04:13.251 LINK simple_copy 00:04:13.251 LINK reset 00:04:13.251 CC examples/nvme/arbitration/arbitration.o 00:04:13.251 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:13.251 LINK err_injection 00:04:13.251 CC examples/nvme/hello_world/hello_world.o 00:04:13.251 CC examples/nvme/abort/abort.o 00:04:13.251 LINK overhead 00:04:13.251 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:13.251 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:13.251 CC examples/nvme/hotplug/hotplug.o 00:04:13.251 CC examples/nvme/reconnect/reconnect.o 00:04:13.251 LINK fused_ordering 00:04:13.251 LINK sgl 00:04:13.251 LINK mkfs 00:04:13.251 LINK reserve 00:04:13.251 LINK doorbell_aers 00:04:13.251 CC examples/accel/perf/accel_perf.o 00:04:13.251 LINK memory_ut 00:04:13.251 LINK nvme_dp 00:04:13.251 CC examples/blob/cli/blobcli.o 00:04:13.251 LINK aer 00:04:13.251 CC examples/blob/hello_world/hello_blob.o 00:04:13.251 LINK fdp 00:04:13.251 LINK nvme_compliance 00:04:13.510 LINK cmb_copy 00:04:13.510 LINK 
hello_world 00:04:13.510 LINK dif 00:04:13.510 LINK pmr_persistence 00:04:13.510 LINK arbitration 00:04:13.510 LINK hotplug 00:04:13.510 LINK reconnect 00:04:13.769 LINK hello_blob 00:04:13.769 LINK abort 00:04:13.769 LINK nvme_manage 00:04:13.769 LINK accel_perf 00:04:13.769 LINK blobcli 00:04:13.769 CC test/bdev/bdevio/bdevio.o 00:04:14.028 CC examples/bdev/hello_world/hello_bdev.o 00:04:14.028 CC examples/bdev/bdevperf/bdevperf.o 00:04:14.287 LINK iscsi_fuzz 00:04:14.287 LINK bdevio 00:04:14.287 LINK hello_bdev 00:04:14.547 LINK cuse 00:04:14.805 LINK bdevperf 00:04:15.373 CC examples/nvmf/nvmf/nvmf.o 00:04:15.632 LINK nvmf 00:04:18.174 LINK esnap 00:04:18.174 00:04:18.174 real 0m49.144s 00:04:18.174 user 10m11.556s 00:04:18.174 sys 2m31.429s 00:04:18.174 13:58:25 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:18.174 13:58:25 make -- common/autotest_common.sh@10 -- $ set +x 00:04:18.174 ************************************ 00:04:18.174 END TEST make 00:04:18.174 ************************************ 00:04:18.174 13:58:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:18.174 13:58:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:18.174 13:58:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:18.174 13:58:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.174 13:58:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:18.174 13:58:26 -- pm/common@44 -- $ pid=18664 00:04:18.174 13:58:26 -- pm/common@50 -- $ kill -TERM 18664 00:04:18.174 13:58:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.174 13:58:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:18.174 13:58:26 -- pm/common@44 -- $ pid=18666 00:04:18.174 13:58:26 -- pm/common@50 -- $ kill -TERM 18666 00:04:18.174 13:58:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.174 13:58:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:18.174 13:58:26 -- pm/common@44 -- $ pid=18668 00:04:18.174 13:58:26 -- pm/common@50 -- $ kill -TERM 18668 00:04:18.174 13:58:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.174 13:58:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:18.174 13:58:26 -- pm/common@44 -- $ pid=18700 00:04:18.174 13:58:26 -- pm/common@50 -- $ sudo -E kill -TERM 18700 00:04:18.175 13:58:26 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:18.175 13:58:26 -- nvmf/common.sh@7 -- # uname -s 00:04:18.175 13:58:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:18.175 13:58:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:18.175 13:58:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:18.175 13:58:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:18.175 13:58:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:18.175 13:58:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:18.175 13:58:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:18.175 13:58:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:18.175 13:58:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:18.175 13:58:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:18.175 13:58:26 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:18.175 13:58:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:18.175 13:58:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:18.175 13:58:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:18.175 13:58:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:18.175 13:58:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:18.175 13:58:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:18.175 13:58:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:18.175 13:58:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:18.175 13:58:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:18.175 13:58:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.175 13:58:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.175 13:58:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.175 13:58:26 -- paths/export.sh@5 -- # export PATH 00:04:18.175 13:58:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.175 13:58:26 -- nvmf/common.sh@47 -- # : 0 00:04:18.175 13:58:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:18.175 13:58:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:18.175 13:58:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:18.175 13:58:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:18.175 13:58:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:18.175 13:58:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:18.175 13:58:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:18.175 13:58:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:18.175 13:58:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:18.175 13:58:26 -- spdk/autotest.sh@32 -- # uname -s 00:04:18.175 13:58:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:18.175 13:58:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:18.175 13:58:26 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:18.175 13:58:26 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:18.175 13:58:26 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:18.175 13:58:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:18.175 13:58:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:18.175 13:58:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:18.175 13:58:26 -- spdk/autotest.sh@48 -- # udevadm_pid=74760 00:04:18.175 13:58:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:18.175 13:58:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:18.175 13:58:26 -- pm/common@17 -- # local monitor 00:04:18.175 13:58:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.175 13:58:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.175 13:58:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.175 13:58:26 -- pm/common@21 -- # date +%s 00:04:18.175 13:58:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.175 13:58:26 -- pm/common@21 -- # date +%s 00:04:18.175 13:58:26 -- pm/common@25 -- # sleep 1 00:04:18.175 13:58:26 -- pm/common@21 -- # date +%s 00:04:18.175 13:58:26 -- pm/common@21 -- # date +%s 00:04:18.175 13:58:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721995106 00:04:18.175 13:58:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721995106 00:04:18.175 13:58:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721995106 00:04:18.175 13:58:26 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721995106 00:04:18.436 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721995106_collect-vmstat.pm.log 00:04:18.436 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721995106_collect-cpu-load.pm.log 00:04:18.436 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721995106_collect-cpu-temp.pm.log 00:04:18.436 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721995106_collect-bmc-pm.bmc.pm.log 00:04:19.381 13:58:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:19.381 13:58:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:19.381 13:58:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:19.381 13:58:27 -- common/autotest_common.sh@10 -- # set +x 00:04:19.381 13:58:27 -- spdk/autotest.sh@59 -- # create_test_list 00:04:19.381 13:58:27 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:19.381 13:58:27 -- common/autotest_common.sh@10 -- # set +x 00:04:19.381 13:58:27 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:19.381 13:58:27 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:19.381 13:58:27 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
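The four collect-* monitors launched above each record their PID so the kill -TERM teardown seen at the end of the make step can find them again. A minimal sketch of that pidfile pattern, using a stand-in vmstat collector rather than the real scripts under scripts/perf/pm/ (function names and log layout here are illustrative, not the actual pm/common code):
start_monitor() {
    local dir=$1 tag=$2
    # Stand-in collector: append one sample per second until signalled.
    vmstat -n 1 >> "$dir/$tag.pm.log" &
    echo $! > "$dir/$tag.pid"    # the recorded PID is what kill -TERM targets later
}
stop_monitors() {
    local dir=$1 pidfile
    for pidfile in "$dir"/*.pid; do
        [[ -e $pidfile ]] || continue
        kill -TERM "$(cat "$pidfile")" 2>/dev/null || true   # mirrors the signal_monitor_resources TERM pass
        rm -f "$pidfile"
    done
}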
00:04:19.381 13:58:27 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:19.381 13:58:27 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:19.381 13:58:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:19.381 13:58:27 -- common/autotest_common.sh@1455 -- # uname 00:04:19.381 13:58:27 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:19.381 13:58:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:19.381 13:58:27 -- common/autotest_common.sh@1475 -- # uname 00:04:19.381 13:58:27 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:19.381 13:58:27 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:19.381 13:58:27 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:19.381 13:58:27 -- spdk/autotest.sh@72 -- # hash lcov 00:04:19.381 13:58:27 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:19.381 13:58:27 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:19.381 --rc lcov_branch_coverage=1 00:04:19.381 --rc lcov_function_coverage=1 00:04:19.381 --rc genhtml_branch_coverage=1 00:04:19.381 --rc genhtml_function_coverage=1 00:04:19.381 --rc genhtml_legend=1 00:04:19.381 --rc geninfo_all_blocks=1 00:04:19.381 ' 00:04:19.381 13:58:27 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:19.381 --rc lcov_branch_coverage=1 00:04:19.381 --rc lcov_function_coverage=1 00:04:19.381 --rc genhtml_branch_coverage=1 00:04:19.381 --rc genhtml_function_coverage=1 00:04:19.381 --rc genhtml_legend=1 00:04:19.381 --rc geninfo_all_blocks=1 00:04:19.381 ' 00:04:19.381 13:58:27 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:19.381 --rc lcov_branch_coverage=1 00:04:19.381 --rc lcov_function_coverage=1 00:04:19.381 --rc genhtml_branch_coverage=1 00:04:19.381 --rc genhtml_function_coverage=1 00:04:19.381 --rc genhtml_legend=1 00:04:19.381 --rc geninfo_all_blocks=1 00:04:19.381 --no-external' 00:04:19.381 13:58:27 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:19.381 --rc lcov_branch_coverage=1 00:04:19.381 --rc lcov_function_coverage=1 00:04:19.381 --rc genhtml_branch_coverage=1 00:04:19.381 --rc genhtml_function_coverage=1 00:04:19.381 --rc genhtml_legend=1 00:04:19.381 --rc geninfo_all_blocks=1 00:04:19.381 --no-external' 00:04:19.381 13:58:27 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:19.381 lcov: LCOV version 1.14 00:04:19.381 13:58:27 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:34.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:34.276 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:49.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:49.159 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:49.159 
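The Baseline capture above is the first half of the usual lcov flow: an --initial (-i) pass over the .gcno files records every instrumented line with zero hits, and later captures merge in the real .gcda counts. A compressed sketch of that flow, assuming illustrative tracefile names; only the baseline step appears in this excerpt:
lcov $LCOV_OPTS -q -c -i -t Baseline -d "$src" -o cov_base.info      # initial capture: all lines recorded as 0 hits (.gcno only)
# ... tests run; instrumented binaries write .gcda counter files ...
lcov $LCOV_OPTS -q -c -t Tests -d "$src" -o cov_test.info           # second capture picks up the real execution counts
lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info # merge, so files never executed still report 0% instead of vanishing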
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
[~60 further identical warning pairs ('<header>.gcno:no functions found' followed by 'geninfo: WARNING: GCOV did not produce any data for <header>.gcno') omitted for the remaining test/cpp_headers objects: assert, accel_module, barrier, base64, bdev*, bit*, blob*, conf, config, cpuset, crc16/32/64, dif, dma, endian, env*, event, fd*, file, ftl, gpt_spec, hexlify, histogram_data, idxd*, init, ioat*, iscsi_spec, json*, keyring*, likely, log, lvol, memory, mmio, nbd, net, notify, nvme*, nvmf*, opal*, pci_ids, pipe, queue, reduce, rpc, scheduler, scsi*, sock, stdinc, string, thread, trace*, tree, ublk, util, uuid, version. These header-only compilation units define no functions, so empty baseline coverage data is expected; the tail of the list follows.]
00:04:49.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:49.161 geninfo: WARNING: GCOV did not produce any data for
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:49.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:49.161 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:49.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:49.161 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:49.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:49.161 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:49.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:49.161 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:49.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:49.161 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:53.350 13:59:01 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:53.350 13:59:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:53.350 13:59:01 -- common/autotest_common.sh@10 -- # set +x 00:04:53.350 13:59:01 -- spdk/autotest.sh@91 -- # rm -f 00:04:53.350 13:59:01 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.727 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:54.727 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:54.728 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:54.728 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:54.728 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:54.728 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:54.728 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:54.728 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:54.728 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:04:54.728 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:54.728 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:54.728 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:54.728 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:54.728 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:54.728 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:54.728 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:54.728 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:54.988 13:59:02 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:54.988 13:59:02 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:54.988 13:59:02 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:54.988 13:59:02 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:54.988 13:59:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:54.988 13:59:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:54.988 13:59:02 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:54.988 13:59:02 -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:54.988 13:59:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:54.988 13:59:02 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:54.988 13:59:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:54.988 13:59:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:54.988 13:59:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:54.988 13:59:02 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:54.988 13:59:02 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:54.988 No valid GPT data, bailing 00:04:54.988 13:59:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:54.988 13:59:02 -- scripts/common.sh@391 -- # pt= 00:04:54.988 13:59:02 -- scripts/common.sh@392 -- # return 1 00:04:54.988 13:59:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:54.988 1+0 records in 00:04:54.988 1+0 records out 00:04:54.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00187611 s, 559 MB/s 00:04:54.988 13:59:02 -- spdk/autotest.sh@118 -- # sync 00:04:54.988 13:59:02 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:54.988 13:59:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:54.988 13:59:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:56.890 13:59:04 -- spdk/autotest.sh@124 -- # uname -s 00:04:56.890 13:59:04 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:56.890 13:59:04 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:56.890 13:59:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.890 13:59:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.890 13:59:04 -- common/autotest_common.sh@10 -- # set +x 00:04:56.890 ************************************ 00:04:56.890 START TEST setup.sh 00:04:56.890 ************************************ 00:04:56.890 13:59:04 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:56.890 * Looking for test storage... 00:04:56.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:56.890 13:59:04 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:56.890 13:59:04 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:56.890 13:59:04 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:56.890 13:59:04 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.890 13:59:04 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.890 13:59:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:56.890 ************************************ 00:04:56.890 START TEST acl 00:04:56.890 ************************************ 00:04:56.890 13:59:04 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:56.890 * Looking for test storage... 
00:04:56.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:56.890 13:59:04 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:56.890 13:59:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:56.890 13:59:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:56.890 13:59:04 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:56.890 13:59:04 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:56.890 13:59:04 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:56.891 13:59:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:56.891 13:59:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:56.891 13:59:04 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:56.891 13:59:04 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:56.891 13:59:04 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:56.891 13:59:04 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:56.891 13:59:04 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:56.891 13:59:04 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:56.891 13:59:04 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:56.891 13:59:04 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:58.800 13:59:06 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:58.800 13:59:06 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:58.800 13:59:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:58.800 13:59:06 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:58.800 13:59:06 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.800 13:59:06 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:59.737 Hugepages 00:04:59.737 node hugesize free / total 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 00:04:59.737 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:59.737 13:59:07 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:59.737 13:59:07 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.737 13:59:07 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.737 13:59:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:59.737 ************************************ 00:04:59.737 START TEST denied 00:04:59.737 ************************************ 00:04:59.737 13:59:07 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:59.738 13:59:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:04:59.738 13:59:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:59.738 13:59:07 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:04:59.738 13:59:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.738 13:59:07 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.117 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:05:01.117 13:59:08 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:05:01.117 13:59:08 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:01.117 13:59:08 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:01.117 13:59:08 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:05:01.117 13:59:08 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:05:01.117 13:59:08 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:01.117 13:59:08 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:01.117 13:59:08 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:01.117 13:59:08 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:01.117 13:59:08 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:03.656 00:05:03.656 real 0m3.839s 00:05:03.656 user 0m1.091s 00:05:03.656 sys 0m1.764s 00:05:03.656 13:59:11 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.656 13:59:11 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:03.656 ************************************ 00:05:03.656 END TEST denied 00:05:03.656 ************************************ 00:05:03.656 13:59:11 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:03.656 13:59:11 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.656 13:59:11 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.656 13:59:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:03.656 ************************************ 00:05:03.656 START TEST allowed 00:05:03.656 ************************************ 00:05:03.656 13:59:11 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:05:03.656 13:59:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:05:03.656 13:59:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:03.656 13:59:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.656 13:59:11 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:05:03.656 13:59:11 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:06.191 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:06.191 13:59:13 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:06.191 13:59:13 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:06.191 13:59:13 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:06.191 13:59:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:06.191 13:59:13 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:07.571 00:05:07.571 real 0m3.921s 00:05:07.571 user 0m1.020s 00:05:07.571 sys 0m1.815s 00:05:07.571 13:59:15 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.571 13:59:15 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:07.571 ************************************ 00:05:07.571 END TEST allowed 00:05:07.571 ************************************ 00:05:07.571 00:05:07.571 real 0m10.629s 00:05:07.571 user 0m3.200s 00:05:07.571 sys 0m5.436s 00:05:07.571 13:59:15 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.571 13:59:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:07.571 ************************************ 00:05:07.571 END TEST acl 00:05:07.571 ************************************ 00:05:07.571 13:59:15 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:07.571 13:59:15 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.571 13:59:15 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.571 13:59:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:07.571 ************************************ 00:05:07.571 START TEST hugepages 00:05:07.571 ************************************ 00:05:07.571 13:59:15 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:07.571 * Looking for test storage... 00:05:07.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 45065892 kB' 'MemAvailable: 48446132 kB' 'Buffers: 11392 kB' 'Cached: 8766996 kB' 'SwapCached: 0 kB' 'Active: 6137932 kB' 'Inactive: 3424408 kB' 'Active(anon): 5765084 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 787264 kB' 'Mapped: 147812 kB' 'Shmem: 4981132 kB' 'KReclaimable: 153212 kB' 'Slab: 434076 kB' 'SReclaimable: 153212 kB' 'SUnreclaim: 280864 kB' 'KernelStack: 12736 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562312 kB' 'Committed_AS: 7377772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 192916 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB' 00:05:07.571 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:05:07.572 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[Dozens of identical skip iterations omitted: for every /proc/meminfo field ahead of Hugepagesize (MemFree, MemAvailable, Buffers, Cached, SwapCached, Active/Inactive and their anon/file variants, Unevictable, Mlocked, SwapTotal/SwapFree, Zswap*, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable/SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, Vmalloc*, Percpu, HardwareCorrupted, AnonHugePages, ...) the loop prints the same four-line trace: the '[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]' test, '# continue', "# IFS=': '", and '# read -r var val _'. The raw trace resumes after the sketch below.]
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:07.573 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:07.574 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
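The scan condensed above is the core of setup/common.sh's get_meminfo. A minimal re-creation of the pattern the trace shows (simplified to read /proc/meminfo directly; not the script verbatim):

    #!/usr/bin/env bash
    # Walk meminfo line by line; print the value of the first key that
    # matches $1, mirroring the IFS=': ' / read / continue loop traced above.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys are skipped
            echo "$val"                        # e.g. 2048 for Hugepagesize
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo Hugepagesize    # prints 2048 on this node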
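The get_nodes/clear_hp trace above amounts to zeroing every per-node hugepage pool before the test begins. A condensed sketch of that loop (same sysfs paths as the trace, with the redirect that xtrace hides made explicit; needs root):

    # Reset nr_hugepages for every NUMA node and every page-size directory
    # (hugepages-2048kB and hugepages-1048576kB on this machine), so the
    # test starts from an empty pool before CLEAR_HUGE=yes is exported.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done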
00:05:07.574 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:07.574 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:07.574 13:59:15 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:07.574 13:59:15 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:07.574 13:59:15 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:07.574 13:59:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:07.832 ************************************
00:05:07.832 START TEST default_setup
00:05:07.832 ************************************
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:05:07.832 13:59:15 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
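The nr_hugepages=1024 figure in the get_test_nr_hugepages trace above is consistent with dividing the requested pool size by the detected hugepage size; a sketch of that arithmetic (variable names follow the trace):

    size=2097152              # kB requested (2 GiB), first argument
    default_hugepages=2048    # kB per page, from get_meminfo Hugepagesize
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    nodes_test[0]=$nr_hugepages                    # node_ids=('0'): all on node 0
    echo "nr_hugepages=$nr_hugepages"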
00:05:09.212 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:09.212 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:09.212 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:09.212 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:09.212 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:09.212 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:09.212 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:09.212 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:09.212 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:09.212 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:09.212 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:09.212 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:09.212 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:09.212 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:09.212 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:09.212 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:10.162 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci
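The "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines above are scripts/setup.sh rebinding each PCI function to vfio-pci. A generic illustration of that mechanism via the standard sysfs interface (this is not setup.sh's code; the BDF is just one device from the log; needs root):

    bdf=0000:0b:00.0                      # example: the NVMe device above
    modprobe vfio-pci                     # make sure the target driver is loaded

    # Detach the device from its current driver (nvme/ioatdma), if one is bound.
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
    fi

    # Steer the next probe to vfio-pci and trigger it.
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe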
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:10.162 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47159632 kB' 'MemAvailable: 50539696 kB' 'Buffers: 11392 kB' 'Cached: 8767080 kB' 'SwapCached: 0 kB' 'Active: 6159264 kB' 'Inactive: 3424408 kB' 'Active(anon): 5786416 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 808404 kB' 'Mapped: 147904 kB' 'Shmem: 4981216 kB' 'KReclaimable: 152860 kB' 'Slab: 433224 kB' 'SReclaimable: 152860 kB' 'SUnreclaim: 280364 kB' 'KernelStack: 12560 kB' 'PageTables: 7632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7395460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193012 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
[xtrace condensed: the same common.sh@31-32 loop walked this snapshot key by key (MemTotal through HardwareCorrupted), comparing each against AnonHugePages and skipping non-matches with continue]
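The get_meminfo preamble above also supports per-node queries: here $node is empty, so the [[ -e /sys/devices/system/node/node/meminfo ]] test fails and mem_f stays /proc/meminfo. A sketch of the node-aware branch the trace hints at (names mirror the trace; extglob is needed for the +([0-9]) pattern):

    shopt -s extglob
    node=0                                # a real node id would be passed in
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # "Node 0 MemFree: ..." -> "MemFree: ..."
    printf '%s\n' "${mem[@]}" | head -n 3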
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
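anon=0 above means no transparent huge pages are in use, so THP cannot distort the pool accounting. The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test earlier in the trace is the gate for consulting this counter at all; an equivalent standalone check:

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP is enabled (always or madvise), so AnonHugePages can be
        # nonzero and has to be accounted for in the verification.
        awk '/^AnonHugePages:/ {print $2, $3}' /proc/meminfo
    fi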
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:10.163 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47159380 kB' 'MemAvailable: 50539444 kB' 'Buffers: 11392 kB' 'Cached: 8767080 kB' 'SwapCached: 0 kB' 'Active: 6159348 kB' 'Inactive: 3424408 kB' 'Active(anon): 5786500 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 808564 kB' 'Mapped: 147904 kB' 'Shmem: 4981216 kB' 'KReclaimable: 152860 kB' 'Slab: 433224 kB' 'SReclaimable: 152860 kB' 'SUnreclaim: 280364 kB' 'KernelStack: 12624 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7395480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 192980 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
[xtrace condensed: common.sh@31-32 walked the snapshot again (MemTotal through HugePages_Rsvd), comparing each key against HugePages_Surp and skipping non-matches with continue]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
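What the setup/common.sh@17-@33 tags above trace out: get_meminfo takes a meminfo key plus an optional NUMA node, reads /proc/meminfo (or that node's own meminfo file), and prints the value of the first line whose key matches. A minimal runnable sketch reconstructed from this trace -- not the verbatim SPDK source, so details may differ:

#!/usr/bin/env bash
shopt -s extglob                                  # needed for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem var val _ line
    # @23/@24: with a node argument, read that node's meminfo file instead
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")              # @29: per-node lines carry a "Node N " prefix
    local IFS=': '
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"             # @31: split "Key: value [kB]"
        [[ $var == "$get" ]] || continue          # @32: skip every key until the requested one
        echo "$val"                               # @33: e.g. 0 for HugePages_Surp above
        return 0
    done
    return 1                                      # assumption: key absent -> nonzero return
}
get_meminfo HugePages_Total                       # prints 1024 on the machine traced here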
00:05:10.165 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47158008 kB' 'MemAvailable: 50538072 kB' 'Buffers: 11392 kB' 'Cached: 8767100 kB' 'SwapCached: 0 kB' 'Active: 6158728 kB' 'Inactive: 3424408 kB' 'Active(anon): 5785880 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 807888 kB' 'Mapped: 147884 kB' 'Shmem: 4981236 kB' 'KReclaimable: 152860 kB' 'Slab: 433280 kB' 'SReclaimable: 152860 kB' 'SUnreclaim: 280420 kB' 'KernelStack: 12608 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7395500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 192996 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
00:05:10.165-10.167 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # continue -- scan skips MemTotal through HugePages_Free (none match HugePages_Rsvd)
00:05:10.167 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == HugePages_Rsvd ]]
00:05:10.167 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:10.167 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:10.167 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:10.167 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:05:10.167 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:10.167 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:10.167 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:10.167 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:10.167 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
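The check being traced at hugepages.sh@107/@109 (and again at @110 below): surplus plus reserved plus configured pages must account for the requested count. The snapshot above is self-consistent too: HugePages_Total: 1024 at Hugepagesize: 2048 kB is 1024 x 2048 kB = 2097152 kB, exactly the Hugetlb field. A standalone sketch of the same accounting check -- the helper name meminfo and the variable want are assumptions, since the trace only shows expanded values:

#!/usr/bin/env bash
# Pull one value out of /proc/meminfo by key (keys end with ':' in the file).
meminfo() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }
want=1024                                 # assumed name; requested page count in this run
surp=$(meminfo HugePages_Surp)            # @99  -> 0 in the trace
resv=$(meminfo HugePages_Rsvd)            # @100 -> 0 in the trace
nr_hugepages=$(meminfo HugePages_Total)   # @110 -> 1024 in the trace
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
if (( want == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent"   # this run: 1024 == 1024 + 0 + 0
else
    echo "hugepage accounting mismatch" >&2
fi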
00:05:10.167 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:10.167-10.168 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 -- # local get=HugePages_Total, node= (empty, so mem_f=/proc/meminfo), mapfile -t mem, IFS=': ', read -r var val _
00:05:10.168 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47158008 kB' 'MemAvailable: 50538072 kB' 'Buffers: 11392 kB' 'Cached: 8767124 kB' 'SwapCached: 0 kB' 'Active: 6158756 kB' 'Inactive: 3424408 kB' 'Active(anon): 5785908 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 807888 kB' 'Mapped: 147884 kB' 'Shmem: 4981260 kB' 'KReclaimable: 152860 kB' 'Slab: 433280 kB' 'SReclaimable: 152860 kB' 'SUnreclaim: 280420 kB' 'KernelStack: 12608 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7395524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 192996 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
00:05:10.168-10.169 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # continue -- scan skips MemTotal through Unaccepted (none match HugePages_Total)
00:05:10.169 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == HugePages_Total ]]
00:05:10.169 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:05:10.169 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
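The per-node pass that follows below: get_nodes enumerates /sys/devices/system/node/node*, records a per-node page target (1024 on node0, 0 on node1, no_nodes=2), and get_meminfo is then re-run with node=0 so it reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. A hedged standalone sketch of querying per-node surplus pages this way -- awk stands in for the script's scan loop, and where each node's target count comes from is not visible in the trace:

#!/usr/bin/env bash
# Enumerate NUMA nodes and query HugePages_Surp per node, mirroring the
# hugepages.sh@112-@117 / common.sh@17-@24 flow traced below. Not SPDK's code.
shopt -s extglob nullglob
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=0          # placeholder; the trace assigns 1024 (node0) and 0 (node1)
done
echo "no_nodes=${#nodes_sys[@]}"         # the trace reports no_nodes=2
for n in "${!nodes_sys[@]}"; do
    # Per-node meminfo lines look like "Node 0 HugePages_Surp: 0", so the key is
    # field 3 and the value field 4 once the "Node N" prefix is accounted for.
    surp=$(awk -v k="HugePages_Surp:" '$3 == k { print $4 }' \
        "/sys/devices/system/node/node$n/meminfo")
    echo "node$n HugePages_Surp=$surp"
done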
00:05:10.169 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:10.169 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:05:10.169 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27-33 -- # for node in /sys/devices/system/node/node+([0-9]): nodes_sys[0]=1024, nodes_sys[1]=0; no_nodes=2; (( no_nodes > 0 ))
00:05:10.169 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115-116 -- # node 0: (( nodes_test[node] += resv ))
00:05:10.169 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:10.169 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 -- # local get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ', read -r var val _
00:05:10.170 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27764156 kB' 'MemUsed: 5065728 kB' 'SwapCached: 0 kB' 'Active: 1776688 kB' 'Inactive: 171572 kB' 'Active(anon): 1591312 kB' 'Inactive(anon): 0 kB' 'Active(file): 185376 kB' 'Inactive(file): 171572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1512500 kB' 'Mapped: 107368 kB' 'AnonPages: 438948 kB' 'Shmem: 1155552 kB' 'KernelStack: 6776 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82452 kB' 'Slab: 213640 kB' 'SReclaimable: 82452 kB' 'SUnreclaim: 131188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:10.170-10.430 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # continue -- scan over node0 meminfo skips MemTotal through HugePages_Total while looking for HugePages_Surp
setup/common.sh@31 -- # IFS=': ' 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:10.430 node0=1024 expecting 1024 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:10.430 00:05:10.430 real 0m2.589s 00:05:10.430 user 0m0.665s 00:05:10.430 sys 0m0.976s 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.430 13:59:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:10.430 ************************************ 00:05:10.430 END TEST default_setup 00:05:10.430 ************************************ 00:05:10.430 13:59:18 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:10.430 13:59:18 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.430 13:59:18 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.430 13:59:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:10.430 ************************************ 00:05:10.430 START TEST per_node_1G_alloc 00:05:10.430 ************************************ 00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
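Above, get_test_nr_hugepages converts the requested 1048576 kB (1G) into 2048 kB hugepages and hands the resulting count to every node listed after the size. A minimal sketch of that arithmetic, reconstructed from the xtrace rather than copied from setup/hugepages.sh:

    # Sketch only -- simplified from the trace above, not verbatim SPDK code.
    size=1048576             # requested size in kB (1G)
    default_hugepages=2048   # Hugepagesize from /proc/meminfo, in kB
    nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512 pages
    user_nodes=(0 1)         # HUGENODE=0,1
    declare -a nodes_test
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages    # each listed node gets the full 512
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512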
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:10.430 13:59:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:11.368 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:11.368 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:11.368 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:11.368 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:11.368 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:11.368 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:11.368 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:11.368 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:11.368 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:11.368 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:11.368 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:11.368 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:11.368 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:11.368 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:11.368 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:11.368 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:11.368 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
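Each "Already using the vfio-pci driver" line is setup.sh reporting the kernel driver already bound to a PCI function it manages, so nothing needs rebinding before the hugepage step. The binding can be inspected by hand through the standard sysfs driver symlink; an illustrative check (standard kernel paths, not the script's own code):

    # Print the driver currently bound to a PCI function, e.g. the NVMe device above.
    bdf=0000:0b:00.0
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")"   # -> vfio-pci
    else
        echo "no driver bound to $bdf"
    fi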
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.638 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47161420 kB' 'MemAvailable: 50541484 kB' 'Buffers: 11392 kB' 'Cached: 8767200 kB' 'SwapCached: 0 kB' 'Active: 6161240 kB' 'Inactive: 3424408 kB' 'Active(anon): 5788392 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 810260 kB' 'Mapped: 147916 kB' 'Shmem: 4981336 kB' 'KReclaimable: 152860 kB' 'Slab: 433420 kB' 'SReclaimable: 152860 kB' 'SUnreclaim: 280560 kB' 'KernelStack: 12608 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7395708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193092 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
00:05:11.639 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: each key from MemTotal through HardwareCorrupted in the order above fails the AnonHugePages match and continues]
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
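The long run of [[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue steps condensed above is get_meminfo walking /proc/meminfo one line at a time until the requested key matches, then echoing its value. Reconstructed from the trace, the function is roughly the following; a hedged sketch of setup/common.sh, not a verbatim copy:

    shopt -s extglob                     # for the +([0-9]) pattern below
    get_meminfo() {                      # usage: get_meminfo <key> [node]
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo
        # With a node argument, read that node's meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # strip the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the continue storm in the trace
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo AnonHugePages   # -> 0 here ('AnonHugePages: 0 kB' in the snapshot)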
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.640 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47160800 kB' 'MemAvailable: 50540864 kB' 'Buffers: 11392 kB' 'Cached: 8767200 kB' 'SwapCached: 0 kB' 'Active: 6161356 kB' 'Inactive: 3424408 kB' 'Active(anon): 5788508 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 810404 kB' 'Mapped: 147900 kB' 'Shmem: 4981336 kB' 'KReclaimable: 152860 kB' 'Slab: 433388 kB' 'SReclaimable: 152860 kB' 'SUnreclaim: 280528 kB' 'KernelStack: 12640 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7395728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193060 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
00:05:11.641 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: each key from MemTotal through HugePages_Rsvd in the order above fails the HugePages_Surp match and continues]
00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
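With anon=0 and surp=0 recorded, verify_nr_hugepages fetches HugePages_Rsvd next and then reconciles the counters against the expected total; the default_setup pass earlier printed "node0=1024 expecting 1024" from the same logic. The shape of that final check, as an illustrative sketch reusing the get_meminfo sketch above (variable names follow the trace; the exact adjustment is an assumption):

    # Illustrative only -- mirrors the shape of the check seen in the trace.
    anon=$(get_meminfo AnonHugePages)     # 0 in this run
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # queried next in the log
    total=$(get_meminfo HugePages_Total)  # 1024 per the /proc/meminfo snapshots
    expecting=1024
    (( total - surp == expecting )) && echo "node0=$(( total - surp )) expecting $expecting"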
get=HugePages_Rsvd 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47164280 kB' 'MemAvailable: 50544344 kB' 'Buffers: 11392 kB' 'Cached: 8767220 kB' 'SwapCached: 0 kB' 'Active: 6161112 kB' 'Inactive: 3424408 kB' 'Active(anon): 5788264 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 810160 kB' 'Mapped: 147900 kB' 'Shmem: 4981356 kB' 'KReclaimable: 152860 kB' 'Slab: 433452 kB' 'SReclaimable: 152860 kB' 'SUnreclaim: 280592 kB' 'KernelStack: 12640 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7395748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193044 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 
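[editor's note] The entries above trace setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo (or a per-node sysfs file) into an array, strips any "Node <n> " prefix, then scans it one field at a time. Below is a minimal sketch reconstructed from the @-numbered commands in the trace (setup/common.sh@17-@33) — not the verbatim SPDK source:

#!/usr/bin/env bash
shopt -s extglob                     # for the +([0-9]) pattern below

# Sketch of get_meminfo as traced above; reconstructed, not verbatim.
get_meminfo() {
	local get=$1 node=$2 var val
	local mem_f=/proc/meminfo mem
	# Per-node counters live in sysfs; with no node id the probe path
	# degenerates to .../node/node/meminfo, which is exactly the
	# [[ -e ... ]] miss visible in the trace, so the global file is used.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Node files prefix every line with "Node <n> "; strip it so the
	# keys line up with /proc/meminfo's.
	mem=("${mem[@]#Node +([0-9]) }")
	local IFS=': '
	while read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Rsvd    # prints 0 on this box, matching the trace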
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.642 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 
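[editor's note] The backslash-riddled strings such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d repeated throughout are not log corruption: bash's xtrace escapes every character of a quoted right-hand operand of == inside [[ ]] to show it is matched literally rather than as a glob. A tiny demo of the same effect:

set -x
get=HugePages_Surp
[[ MemTotal == "$get" ]] || true   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]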
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.643 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:11.644 nr_hugepages=1024 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:11.644 resv_hugepages=0 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:11.644 surplus_hugepages=0 00:05:11.644 13:59:19 
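[editor's note] At this point the script has established surp=0 and resv=0 and echoes the pool summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0); the entries that follow fetch HugePages_Total the same way and run the consistency check. Roughly, condensed from setup/hugepages.sh@99-@110 — the wrapper name is hypothetical and the ordering is simplified:

# Sketch of the accounting step; assumes get_meminfo from the earlier note.
check_hugepage_pool() {
	local nr_hugepages=1024                  # the size this test requested
	local surp resv total
	surp=$(get_meminfo HugePages_Surp)       # -> 0
	resv=$(get_meminfo HugePages_Rsvd)       # -> 0
	echo "nr_hugepages=$nr_hugepages"
	echo "resv_hugepages=$resv"
	echo "surplus_hugepages=$surp"
	echo "anon_hugepages=$(get_meminfo AnonHugePages)"
	total=$(get_meminfo HugePages_Total)     # -> 1024, scanned next in the log
	# The pool is consistent when the kernel's view matches the request
	# plus any surplus and reserved pages (both 0 here).
	(( total == nr_hugepages + surp + resv ))
}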
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:11.644 anon_hugepages=0 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47165724 kB' 'MemAvailable: 50545788 kB' 'Buffers: 11392 kB' 'Cached: 8767244 kB' 'SwapCached: 0 kB' 'Active: 6161136 kB' 'Inactive: 3424408 kB' 'Active(anon): 5788288 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 810124 kB' 'Mapped: 147900 kB' 'Shmem: 4981380 kB' 'KReclaimable: 152860 kB' 'Slab: 433452 kB' 'SReclaimable: 152860 kB' 'SUnreclaim: 280592 kB' 'KernelStack: 12624 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7395772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193028 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.644 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.645 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.646 13:59:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28817924 kB' 'MemUsed: 4011960 kB' 'SwapCached: 0 kB' 'Active: 1778920 kB' 'Inactive: 171572 kB' 'Active(anon): 1593544 kB' 'Inactive(anon): 0 kB' 'Active(file): 185376 kB' 'Inactive(file): 171572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1512572 kB' 'Mapped: 107384 kB' 'AnonPages: 441112 kB' 'Shmem: 1155624 kB' 'KernelStack: 6792 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82452 kB' 'Slab: 213840 kB' 'SReclaimable: 82452 kB' 'SUnreclaim: 131388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.646 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- 
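[editor's note] get_nodes (setup/hugepages.sh@27-@33, traced just above) enumerates the NUMA nodes and records each node's share of the pool — 512 pages apiece on this 2-node machine, i.e. half of the 1024-page pool each. A sketch; the sysfs path the 512 is read from is an assumption, since xtrace only shows the already-expanded value:

shopt -s extglob
# Sketch of get_nodes; the nr_hugepages source path is assumed.
get_nodes() {
	local node
	nodes_sys=()
	for node in /sys/devices/system/node/node+([0-9]); do
		# 2048 kB pages, per the Hugepagesize line in the snapshots above
		nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
	done
	no_nodes=${#nodes_sys[@]}        # 2 on this machine
	(( no_nodes > 0 ))               # fail when no NUMA nodes are visible
}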
setup/common.sh@31 -- # read -r var val _
[log reflowed; repetitive xtrace condensed: the setup/common.sh@31-32 steps -- IFS=': ', read -r var val _, [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]], continue -- repeat for every remaining node0 meminfo field, MemUsed through HugePages_Free, with no match (00:05:11.646-00:05:11.647)]
00:05:11.647 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.647 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.648 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711836 kB' 'MemFree: 18348600 kB' 'MemUsed: 9363236 kB' 'SwapCached: 0 kB' 'Active: 4382252 kB' 'Inactive: 3252836 kB' 'Active(anon): 4194780 kB' 'Inactive(anon): 0 kB' 'Active(file): 187472 kB' 'Inactive(file): 3252836 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7266108 kB' 'Mapped: 40516 kB' 'AnonPages: 369020 kB' 'Shmem: 3825800 kB' 'KernelStack: 5832 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70408 kB' 'Slab: 219604 kB' 'SReclaimable: 70408 kB' 'SUnreclaim: 149196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[repetitive xtrace condensed: the same setup/common.sh@31-32 skip loop walks the node1 fields MemTotal through HugePages_Free with no match (00:05:11.648-00:05:11.649)]
00:05:11.649 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.649 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.649 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:11.649 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:11.649 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:11.649 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:11.649 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:11.649 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:11.649 node0=512 expecting 512
00:05:11.649 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:11.649 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:11.649 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:11.649 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:05:11.649 node1=512 expecting 512
00:05:11.649 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:11.649 
00:05:11.649 real 0m1.402s
00:05:11.649 user 0m0.596s
00:05:11.649 sys 0m0.769s
00:05:11.649 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:11.649 13:59:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:11.649 ************************************
00:05:11.649 END TEST per_node_1G_alloc
00:05:11.649 ************************************
00:05:11.649 13:59:19 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:11.649 13:59:19 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:11.649 13:59:19 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:11.649 13:59:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:11.908 ************************************
00:05:11.908 START TEST even_2G_alloc
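[Editor's note: the trace above is setup/common.sh's get_meminfo helper walking a meminfo file one "field: value" pair at a time, switching from /proc/meminfo to /sys/devices/system/node/node<N>/meminfo when a node argument is given. A minimal re-sketch of that logic, reconstructed from the xtrace alone; the real helper in spdk/test/setup/common.sh may differ in details:]

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern that strips "Node <N> " prefixes

    get_meminfo() {     # usage: get_meminfo <Field> [numa-node]
        local get=$1 node=$2 var val _ mem_f mem line
        mem_f=/proc/meminfo
        # Per-node meminfo lines look like "Node 1 HugePages_Surp: 0";
        # the prefix is stripped so both files parse identically below.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # Split "HugePages_Surp: 0" / "MemTotal: 60541720 kB" on ': ' boundaries.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp 1    # -> 0 on this run, per the node1 dump above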
00:05:11.908 ************************************
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:11.908 13:59:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:12.845 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:12.846 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:12.846 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:12.846 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:12.846 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:12.846 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:12.846 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:13.110 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:13.110 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:13.110 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:13.110 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:13.110 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:13.110 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:13.110 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:13.110 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:13.110 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:13.110 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.110 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.111 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47171964 kB' 'MemAvailable: 50552324 kB' 'Buffers: 11392 kB' 'Cached: 8767336 kB' 'SwapCached: 0 kB' 'Active: 6164508 kB' 'Inactive: 3424408 kB' 'Active(anon): 5791660 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 813428 kB' 'Mapped: 147524 kB' 'Shmem: 4981472 kB' 'KReclaimable: 153452 kB' 'Slab: 433824 kB' 'SReclaimable: 153452 kB' 'SUnreclaim: 280372 kB' 'KernelStack: 12512 kB' 'PageTables: 7448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7387568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193028 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
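[Editor's note: get_test_nr_hugepages / get_test_nr_hugepages_per_node in the trace above turn the requested 2G pool into per-node counts: 2097152 kB of 2048 kB pages is 1024 hugepages, split evenly across the two nodes before scripts/setup.sh runs with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A compact restatement of that arithmetic; the division steps are inferred from the 1024 -> 512/512 values in the trace, not copied from the script:]

    size_kb=2097152                 # requested pool size, 2 GiB expressed in kB
    default_hugepages=2048          # hugepage size on this host ('Hugepagesize: 2048 kB')
    nr_hugepages=$(( size_kb / default_hugepages ))   # -> 1024 pages
    _no_nodes=2                     # NUMA nodes on GP6
    declare -a nodes_test
    # Walk the nodes from the back, as the trace does, giving each an even share.
    for (( node = _no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / _no_nodes ))   # -> 512 per node
    done
    # Both variables are picked up by spdk/scripts/setup.sh when it sizes the pool.
    export NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes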
[repetitive xtrace condensed: the setup/common.sh@31-32 skip loop walks the /proc/meminfo fields MemTotal through HardwareCorrupted against \A\n\o\n\H\u\g\e\P\a\g\e\s with no match (00:05:13.111-00:05:13.112)]
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
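[Editor's note: at this point verify_nr_hugepages has confirmed transparent hugepages are not forced off (the sysfs setting reads 'always [madvise] never'), read AnonHugePages as 0 kB, and is re-reading HugePages_Surp system-wide before checking the per-node split. The reads restated against the values in the dump above; the exact pass/fail logic lives in setup/hugepages.sh and is not reproduced here:]

    anon=$(get_meminfo AnonHugePages)     # 0 kB  -- no THP interference with the test pool
    surp=$(get_meminfo HugePages_Surp)    # 0     -- no surplus pages outside the configured pool
    total=$(get_meminfo HugePages_Total)  # 1024  -- matches NRHUGE
    free=$(get_meminfo HugePages_Free)    # 1024  -- pool reserved, nothing consumed yet
    printf 'hugepages: total=%s free=%s surp=%s anon=%s kB\n' "$total" "$free" "$surp" "$anon"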
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.112 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47172228 kB' 'MemAvailable: 50552572 kB' 'Buffers: 11392 kB' 'Cached: 8767340 kB' 'SwapCached: 0 kB' 'Active: 6165712 kB' 'Inactive: 3424408 kB' 'Active(anon): 5792864 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 814668 kB' 'Mapped: 147988 kB' 'Shmem: 4981476 kB' 'KReclaimable: 153420 kB' 'Slab: 433768 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280348 kB' 'KernelStack: 12560 kB' 'PageTables: 7596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7388656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193000 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
[repetitive xtrace condensed: the setup/common.sh@31-32 skip loop walks the /proc/meminfo fields MemTotal through KernelStack against \H\u\g\e\P\a\g\e\s\_\S\u\r\p with no match (00:05:13.112-00:05:13.113)]
00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.113 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.114 13:59:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- 
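The trace above is setup/common.sh's get_meminfo resolving HugePages_Surp: with no node argument the -e test on /sys/devices/system/node/node/meminfo fails, so the helper snapshots /proc/meminfo, strips any leading 'Node N ' prefix with an extglob expansion, then re-splits each 'Key: value' line on IFS=': ' until the requested key matches and its value is echoed (0 here). A minimal self-contained sketch of that lookup pattern, assuming the standard /proc/meminfo and per-node sysfs layouts; the function name and body below are illustrative, not SPDK's verbatim source:

#!/usr/bin/env bash
# Sketch only: mirrors the traced lookup, not setup/common.sh verbatim.
shopt -s extglob                     # for the +([0-9]) pattern below
get_meminfo_sketch() {
    local get=$1 node=$2
    local line var val _
    local mem_f=/proc/meminfo
    # With an empty $node this probes ".../node/node/meminfo" and fails,
    # falling back to the global file -- exactly as in the trace above.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#Node +([0-9]) }  # per-node files prefix "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"              # e.g. 0 for HugePages_Surp here
            return 0
        fi
    done < "$mem_f"
    return 1
}
# Usage: get_meminfo_sketch HugePages_Surp      -> 0 on this machine
#        get_meminfo_sketch HugePages_Total 0   -> node 0's page count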
00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.114 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47171100 kB' 'MemAvailable: 50551444 kB' 'Buffers: 11392 kB' 'Cached: 8767340 kB' 'SwapCached: 0 kB' 'Active: 6161844 kB' 'Inactive: 3424408 kB' 'Active(anon): 5788996 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 810788 kB' 'Mapped: 147512 kB' 'Shmem: 4981476 kB' 'KReclaimable: 153420 kB' 'Slab: 433820 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280400 kB' 'KernelStack: 12560 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7385496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 192996 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
[flattened xtrace condensed: setup/common.sh@31-32 re-split each line of the snapshot above and tested every key from MemTotal through HugePages_Free against HugePages_Rsvd, taking the continue branch on each miss]
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:13.116 nr_hugepages=1024
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:13.116 resv_hugepages=0
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:13.116 surplus_hugepages=0
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:13.116 anon_hugepages=0
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.116 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
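By this point hugepages.sh has resolved surp=0 and resv=0, printed the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary, and asserted (( 1024 == nr_hugepages + surp + resv )) before launching the HugePages_Total read-back traced next. A hedged sketch of that accounting identity, reusing the get_meminfo_sketch helper sketched earlier (verify_hugepage_accounting is an illustrative name, not SPDK's):

# Sketch only: the identity asserted around setup/hugepages.sh@107-110.
verify_hugepage_accounting() {
    local nr_hugepages=1024          # requested 2048 kB pages (2G total)
    local surp resv total
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)  # 1024 in this run
    # The kernel's pool must equal the request once surplus and reserved
    # pages are folded in; with both at 0 the stricter check also holds.
    (( total == nr_hugepages + surp + resv )) || return 1
    (( total == nr_hugepages ))
}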
kB' 'MemAvailable: 50546348 kB' 'Buffers: 11392 kB' 'Cached: 8767380 kB' 'SwapCached: 0 kB' 'Active: 6165476 kB' 'Inactive: 3424408 kB' 'Active(anon): 5792628 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 814376 kB' 'Mapped: 147512 kB' 'Shmem: 4981516 kB' 'KReclaimable: 153420 kB' 'Slab: 433820 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280400 kB' 'KernelStack: 12560 kB' 'PageTables: 7536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7388700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193000 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB' 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.380 13:59:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.380 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [... identical skip iterations (test key, continue, reset IFS, read next pair) for KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted; none match HugePages_Total ...] 00:05:13.381 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
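
The scan traced above is setup/common.sh's get_meminfo helper walking meminfo one 'Key: value' pair at a time, continuing past every key until the requested one (here HugePages_Total) matches; the echo/return that follows prints the matched value, 1024. The same pattern repeats below for HugePages_Surp on each NUMA node. A minimal standalone reconstruction of that pattern, pieced together from the mem_f, mapfile, and IFS=': ' steps visible in this trace (not the verbatim SPDK source):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# Reconstruction of the get_meminfo pattern seen in this trace: print one
# key's value from /proc/meminfo, or from one NUMA node's meminfo file.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _ line
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Every non-matching key is one "continue" in the trace above.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total     # prints 1024 on this test node
get_meminfo HugePages_Surp 0    # prints 0 for NUMA node 0
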
00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28810856 kB' 'MemUsed: 4019028 kB' 'SwapCached: 0 kB' 'Active: 1779104 kB' 'Inactive: 171572 kB' 'Active(anon): 1593728 kB' 'Inactive(anon): 0 kB' 'Active(file): 185376 kB' 'Inactive(file): 171572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1512572 kB' 'Mapped: 106680 kB' 'AnonPages: 441296 kB' 'Shmem: 1155624 kB' 'KernelStack: 6728 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83124 kB' 'Slab: 214504 kB' 'SReclaimable: 83124 kB' 'SUnreclaim: 131380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [... identical skip iterations for MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted; none match HugePages_Surp ...] 00:05:13.382 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.383 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.384 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:13.384 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:13.384 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.384 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.384 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.384 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.384 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711836 kB' 'MemFree: 18356316 kB' 'MemUsed: 9355520 kB' 'SwapCached: 0 kB' 'Active: 4381976 kB' 'Inactive: 3252836 kB' 'Active(anon): 4194504 kB' 'Inactive(anon): 0 kB' 'Active(file): 187472 kB' 'Inactive(file): 3252836 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7266240 kB' 'Mapped: 40396 kB' 'AnonPages: 368656 kB' 'Shmem: 3825932 kB' 'KernelStack: 5864 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70296 kB' 'Slab: 219316 kB' 'SReclaimable: 70296 kB' 'SUnreclaim: 149020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:13.384 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [... identical skip iterations for MemTotal through Unaccepted (same key order as node 0; none match HugePages_Surp) ...] 00:05:13.384 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:13.385 node0=512 expecting 512 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:13.385 node1=512 expecting 512 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:13.385 00:05:13.385 real 0m1.523s 00:05:13.385 user 0m0.608s 00:05:13.385 sys 0m0.880s 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.385 13:59:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:13.385 ************************************ 00:05:13.385 END TEST even_2G_alloc 00:05:13.385 ************************************ 00:05:13.385 13:59:21 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:13.385 13:59:21 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.385 13:59:21 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.385 13:59:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:13.385 ************************************ 00:05:13.385 START TEST odd_alloc 00:05:13.385 ************************************ 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:13.385 13:59:21 
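
even_2G_alloc ends cleanly above: the 1024-page pool reconciles at hugepages.sh@110 ((1024 == nr_hugepages + surp + resv)) and both NUMA nodes report the expected 512 pages. The odd_alloc test starting here requests 2098176 kB, which the trace turns into nr_hugepages=1025 pages of 2048 kB, a total that cannot split evenly across two nodes; the hugepages.sh@81-@84 loop traced below gives node1 512 pages and node0 the 513-page remainder. A sketch of that distribution loop, reconstructed from the traced values rather than copied from hugepages.sh:

#!/usr/bin/env bash
# Reconstruction of the per-node split traced at hugepages.sh@81-@84:
# hand out _nr_hugepages across _no_nodes NUMA nodes, assigning the
# highest-numbered node first so an odd total leaves its remainder on
# node 0 (the ":" no-ops mirror the ": 513" / ": 1" lines in the trace).
split_hugepages_per_node() {
    local _nr_hugepages=$1 _no_nodes=$2
    local -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
        : $(( --_no_nodes ))
    done
    declare -p nodes_test
}

split_hugepages_per_node 1025 2   # declare -a nodes_test=([0]="513" [1]="512")
split_hugepages_per_node 1024 2   # declare -a nodes_test=([0]="512" [1]="512")
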
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.385 13:59:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:14.773 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:14.773 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:14.773 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:14.773 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:14.773 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:14.773 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:14.773 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:14.773 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:14.773 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:14.773 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:14.773 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:14.773 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:14.773 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:14.773 0000:80:04.3 
(8086 0e23): Already using the vfio-pci driver 00:05:14.773 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:14.773 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:14.773 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47183208 kB' 'MemAvailable: 50563692 kB' 'Buffers: 11392 kB' 'Cached: 8767464 kB' 'SwapCached: 0 kB' 'Active: 6164128 kB' 'Inactive: 3424408 kB' 'Active(anon): 5791280 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 812516 kB' 'Mapped: 147128 kB' 'Shmem: 4981600 kB' 'KReclaimable: 153420 kB' 'Slab: 433840 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280420 kB' 'KernelStack: 12928 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 7385300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193236 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 425564 kB' 
'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB' 00:05:14.773 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [... identical skip iterations for MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal; none match AnonHugePages ...] 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.774 13:59:22
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47181744 kB' 'MemAvailable: 50562088 kB' 'Buffers: 11392 kB' 'Cached: 8767468 kB' 'SwapCached: 0 kB' 'Active: 6163604 kB' 'Inactive: 3424408 kB' 'Active(anon): 5790756 kB' 'Inactive(anon): 0 
kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 812368 kB' 'Mapped: 147164 kB' 'Shmem: 4981604 kB' 'KReclaimable: 153420 kB' 'Slab: 433832 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280412 kB' 'KernelStack: 13024 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 7385320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193220 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 
13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 
13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
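The loop traced above is the get_meminfo helper from setup/common.sh: it snapshots /proc/meminfo into an array and scans it key by key until the requested field matches, which is why every other key shows up as a single `continue` iteration. A minimal sketch of the helper as it can be reconstructed from this trace alone -- the per-node handling and exact line layout are assumptions, not SPDK's verbatim source:

	# minimal, self-contained sketch; assumptions marked in comments
	shopt -s extglob   # needed for the +([0-9]) pattern below

	get_meminfo() {
		local get=$1
		local node=$2   # assumption: optional NUMA node index; empty in this run
		local var val
		local mem_f mem
		mem_f=/proc/meminfo
		# with a node argument, read that node's meminfo instead (trace: common.sh@23)
		if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
			mem_f=/sys/devices/system/node/node$node/meminfo
		fi
		mapfile -t mem < "$mem_f"
		mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix per-node files carry
		while IFS=': ' read -r var val _; do
			[[ $var == "$get" ]] || continue   # every skipped key is one `continue` above
			echo "$val"                        # the trailing "kB" lands in $_ and is dropped
			return 0
		done < <(printf '%s\n' "${mem[@]}")    # the long quoted printf seen in the trace
	}

Called as `get_meminfo AnonHugePages` against the snapshot below, this prints 0 -- the echo 0 / return 0 / anon=0 sequence the trace just showed.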
00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.774 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.775 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47181744 kB' 'MemAvailable: 50562088 kB' 'Buffers: 11392 kB' 'Cached: 8767468 kB' 'SwapCached: 0 kB' 'Active: 6163604 kB' 'Inactive: 3424408 kB' 'Active(anon): 5790756 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 812368 kB' 'Mapped: 147164 kB' 'Shmem: 4981604 kB' 'KReclaimable: 153420 kB' 'Slab: 433832 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280412 kB' 'KernelStack: 13024 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 7385320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193220 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
[trace condensed: setup/common.sh@31-32 scan the snapshot keys from MemTotal through HugePages_Free against HugePages_Surp; each non-matching key is one `continue`]
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.776 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47179956 kB' 'MemAvailable: 50560300 kB' 'Buffers: 11392 kB' 'Cached: 8767488 kB' 'SwapCached: 0 kB' 'Active: 6162940 kB' 'Inactive: 3424408 kB' 'Active(anon): 5790092 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 811688 kB' 'Mapped: 147088 kB' 'Shmem: 4981624 kB' 'KReclaimable: 153420 kB' 'Slab: 433864 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280444 kB' 'KernelStack: 12736 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 7382980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193076 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
[trace condensed: setup/common.sh@31-32 scan the snapshot keys from MemTotal through HugePages_Free against HugePages_Rsvd; each non-matching key is one `continue`]
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
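At this point the odd_alloc accounting closes out: the test configured an odd page count (1025), read back the anonymous, surplus, and reserved counters, and setup/hugepages.sh@107 requires that the requested pages are fully accounted for by the kernel's pool. A worked form of the two checks, with variable names mirroring the trace (the surrounding control flow is an assumption, not SPDK's verbatim source):

	anon=0            # <- get_meminfo AnonHugePages
	surp=0            # <- get_meminfo HugePages_Surp
	resv=0            # <- get_meminfo HugePages_Rsvd
	nr_hugepages=1025 # the odd page count this test configured

	echo "nr_hugepages=$nr_hugepages"   # hugepages.sh@102
	echo "resv_hugepages=$resv"         # hugepages.sh@103
	echo "surplus_hugepages=$surp"      # hugepages.sh@104
	echo "anon_hugepages=$anon"         # hugepages.sh@105

	(( 1025 == nr_hugepages + surp + resv ))  # @107: 1025 == 1025 + 0 + 0, succeeds
	(( 1025 == nr_hugepages ))                # @109: no surplus/reserved pages masking a shortfall

The snapshots are internally consistent with this: HugePages_Total: 1025 at Hugepagesize: 2048 kB gives 1025 * 2048 kB = 2099200 kB, exactly the Hugetlb figure reported, so the HugePages_Total lookup that follows should read back 1025.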
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.779 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47180228 kB' 'MemAvailable: 50560572 kB' 'Buffers: 11392 kB' 'Cached: 8767488 kB' 'SwapCached: 0 kB' 'Active: 6162420 kB' 'Inactive: 3424408 kB' 'Active(anon): 5789572 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 811224 kB' 'Mapped: 147088 kB' 'Shmem: 4981624 kB' 'KReclaimable: 153420 kB' 'Slab: 433840 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280420 kB' 'KernelStack: 12624 kB' 'PageTables: 7612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 7383000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193076 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
[trace condensed: setup/common.sh@31-32 scan the snapshot keys from MemTotal onward against HugePages_Total; the excerpt breaks off mid-scan at the Bounce comparison, before the match is reached]
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.781 13:59:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.781 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- 
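For readers following the trace: the condensed scan above is get_meminfo() in test/setup/common.sh walking the printf'd meminfo dump key by key until the requested field matches. A minimal standalone sketch of that logic (the function body mirrors the common.sh@17-33 entries traced here; the standalone framing and the example calls are illustrative assumptions, not SPDK source):

  #!/usr/bin/env bash
  # Sketch of get_meminfo as traced above (setup/common.sh@17-33).
  shopt -s extglob

  get_meminfo() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo
      local -a mem
      local line var val _
      # Per-node counters live in sysfs and carry a "Node N " prefix.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip the sysfs "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          # Every non-matching key is one "continue" entry in the trace.
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done
      return 1
  }

  get_meminfo HugePages_Total    # printed 1025 in the run above
  get_meminfo HugePages_Surp 0   # per-node form, used in the next step

The IFS=': ' read splits each "Key: value kB" record so $var carries the key and $val the number; the extglob pattern removes the per-node prefix so the same matching works for both files.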
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.782 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28836720 kB' 'MemUsed: 3993164 kB' 'SwapCached: 0 kB' 'Active: 1780612 kB' 'Inactive: 171572 kB' 'Active(anon): 1595236 kB' 'Inactive(anon): 0 kB' 'Active(file): 185376 kB' 'Inactive(file): 171572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1512572 kB' 'Mapped: 106692 kB' 'AnonPages: 442784 kB' 'Shmem: 1155624 kB' 'KernelStack: 6744 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83124 kB' 'Slab: 214504 kB' 'SReclaimable: 83124 kB' 'SUnreclaim: 131380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@32 '# continue' fires for each node0 meminfo key, MemTotal through HugePages_Free, until HugePages_Surp matches]
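The get_nodes entries at hugepages.sh@27-33 above recorded nodes_sys[0]=512 and nodes_sys[1]=513, the deliberately odd split this test requested. A sketch of that enumeration (assumption: the counts come from the standard per-node 2048 kB hugepage counters in sysfs; the trace does not show the exact read):

  shopt -s extglob nullglob
  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # ${node##*node} reduces ".../node1" to "1", as in the trace.
      nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  declare -p nodes_sys   # e.g. declare -a nodes_sys=([0]="512" [1]="513")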
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.784 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711836 kB' 'MemFree: 18343852 kB' 'MemUsed: 9367984 kB' 'SwapCached: 0 kB' 'Active: 4381824 kB' 'Inactive: 3252836 kB' 'Active(anon): 4194352 kB' 'Inactive(anon): 0 kB' 'Active(file): 187472 kB' 'Inactive(file): 3252836 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7266372 kB' 'Mapped: 40396 kB' 'AnonPages: 368440 kB' 'Shmem: 3826064 kB' 'KernelStack: 5880 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70296 kB' 'Slab: 219336 kB' 'SReclaimable: 70296 kB' 'SUnreclaim: 149040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@32 '# continue' fires for each node1 meminfo key until HugePages_Surp matches]
00:05:14.786 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:14.786 13:59:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:14.786 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:14.786 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:14.786 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:14.786 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:14.786 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:05:14.786 node0=512 expecting 513
13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:14.786 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:14.786 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:14.786 13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:05:14.786 node1=513 expecting 512
13:59:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:05:14.786
00:05:14.786 real 0m1.472s
00:05:14.786 user 0m0.629s
00:05:14.786 sys 0m0.806s
00:05:14.786 13:59:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:14.786 13:59:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:14.786 ************************************
00:05:14.786 END TEST odd_alloc
00:05:14.786 ************************************
00:05:14.786 13:59:22 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:14.786 13:59:22 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:14.786 13:59:22 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:14.786 13:59:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:14.786 ************************************
00:05:14.786 START TEST custom_alloc
00:05:14.786 ************************************
00:05:14.786 13:59:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
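get_test_nr_hugepages 1048576 above reduces a size in kB to a page count: with the 2048 kB Hugepagesize reported in the meminfo dumps, 1048576 / 2048 = 512, hence nr_hugepages=512 (the later 2097152 kB request yields 1024 the same way). A condensed sketch of the hugepages.sh@49-57 branch seen in the trace (the default_hugepages seed value is assumed from the dumps):

  default_hugepages=2048   # kB per huge page, per the Hugepagesize field above
  size=1048576             # requested total, in kB
  if (( size >= default_hugepages )); then          # hugepages.sh@55
      nr_hugepages=$(( size / default_hugepages ))  # 1048576 / 2048 = 512
  fi
  echo "nr_hugepages=$nr_hugepages"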
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
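The @81-84 loop above spreads the 512 pages evenly, 256 per node, counting _no_nodes down to zero (the ': 256' and ': 1' entries appear to be the loop's arithmetic no-ops tracking the remaining pages and nodes). A simplified sketch of that even split:

  _nr_hugepages=512 _no_nodes=2
  per_node=$(( _nr_hugepages / _no_nodes ))   # 256
  declare -a nodes_test
  while (( _no_nodes > 0 )); do
      nodes_test[--_no_nodes]=$per_node       # fills index 1, then index 0
  done
  declare -p nodes_test   # declare -a nodes_test=([0]="256" [1]="256")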
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:14.787 13:59:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:16.165 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:16.165 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:16.165 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:16.165 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:16.165 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:16.165 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:16.165 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:16.165 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:16.165 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:16.165 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:16.165 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:16.165 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:16.165 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:16.165 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:16.165 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:16.165 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:16.165 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
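The @181-187 entries above assemble the HUGENODE value handed to scripts/setup.sh; custom_alloc declares local IFS=, (traced at @167), so "${HUGENODE[*]}" joins the per-node assignments with commas. A sketch of that composition:

  IFS=,   # the real custom_alloc scopes this as "local IFS=," (hugepages.sh@167)
  nodes_hp=([0]=512 [1]=1024)
  HUGENODE=() _nr_hugepages=0
  for node in "${!nodes_hp[@]}"; do
      HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
      (( _nr_hugepages += nodes_hp[node] ))
  done
  echo "HUGENODE=${HUGENODE[*]}"   # nodes_hp[0]=512,nodes_hp[1]=1024
  echo "total=$_nr_hugepages"      # 1536, matching HugePages_Total below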
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:16.165 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:16.433 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:16.433 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:16.433 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:16.433 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 46122720 kB' 'MemAvailable: 49503064 kB' 'Buffers: 11392 kB' 'Cached: 8767596 kB' 'SwapCached: 0 kB' 'Active: 6164556 kB' 'Inactive: 3424408 kB' 'Active(anon): 5791708 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 813284 kB' 'Mapped: 147200 kB' 'Shmem: 4981732 kB' 'KReclaimable: 153420 kB' 'Slab: 433748 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280328 kB' 'KernelStack: 12640 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 7382836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193108 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
[setup/common.sh@31-@32: the read/compare pair repeats for every field in the snapshot above (MemTotal through HardwareCorrupted), taking continue on each non-match, until AnonHugePages matches]
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
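Each get_meminfo call here slurps /proc/meminfo with mapfile and walks it field by field until the requested key matches, which is why the trace repeats the same common.sh@31/@32 lines once per field. A self-contained sketch of that lookup using the same IFS=': ' read pattern as the trace (simplified: the per-node /sys/devices/system/node path checked at common.sh@23 is omitted here):

    #!/usr/bin/env bash
    # Sketch: look up one /proc/meminfo field, mirroring the traced scan.
    get_meminfo() {
        local get=$1 var val _
        local mem_f=/proc/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"                   # common.sh@28
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"  # common.sh@31
            [[ $var == "$get" ]] || continue        # the repeated @32 checks
            echo "${val:-0}"                        # common.sh@33
            return 0
        done
        return 1
    }
    get_meminfo AnonHugePages   # prints 0 on this runner, matching anon=0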
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:16.435 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 46122796 kB' 'MemAvailable: 49503140 kB' 'Buffers: 11392 kB' 'Cached: 8767600 kB' 'SwapCached: 0 kB' 'Active: 6164848 kB' 'Inactive: 3424408 kB' 'Active(anon): 5792000 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 813504 kB' 'Mapped: 147176 kB' 'Shmem: 4981736 kB' 'KReclaimable: 153420 kB' 'Slab: 433716 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280296 kB' 'KernelStack: 12624 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 7383224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193060 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
[setup/common.sh@31-@32: the read/compare pair repeats for every field in the snapshot above (MemTotal through HugePages_Rsvd), taking continue on each non-match, until HugePages_Surp matches]
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
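With anon=0 and surp=0 recorded, and HugePages_Rsvd queried next, verify_nr_hugepages has what it needs to confirm the kernel honored the 1536-page request (512 + 1024). A sketch of the kind of check this enables; the assertions are reconstructed from the traced variables and snapshot values, not the actual verify_nr_hugepages body:

    #!/usr/bin/env bash
    # Sketch: confirm the kernel granted the requested hugepage count.
    nr_hugepages=1536   # 512 (node0) + 1024 (node1) from HUGENODE
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    # The snapshots above report Total=1536, Surp=0, Rsvd=0, so both checks pass here.
    (( total == nr_hugepages )) || echo "got $total hugepages, wanted $nr_hugepages"
    (( surp == 0 && rsvd == 0 )) || echo "unexpected surplus/reserved pages"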
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:16.437 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 46123076 kB' 'MemAvailable: 49503420 kB' 'Buffers: 11392 kB' 'Cached: 8767620 kB' 'SwapCached: 0 kB' 'Active: 6164928 kB' 'Inactive: 3424408 kB' 'Active(anon): 5792080 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 813544 kB' 'Mapped: 147092 kB' 'Shmem: 4981756 kB' 'KReclaimable: 153420 kB' 'Slab: 433724 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280304 kB' 'KernelStack: 12640 kB' 'PageTables: 7612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 7383244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193060 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
[setup/common.sh@31-@32: the read/compare pair repeats per field, scanning for HugePages_Rsvd; MemTotal, MemFree, … ShmemPmdMapped …]
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@100 -- # resv=0 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:16.439 nr_hugepages=1536 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:16.439 resv_hugepages=0 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:16.439 surplus_hugepages=0 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:16.439 anon_hugepages=0 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 46123076 kB' 'MemAvailable: 49503420 kB' 'Buffers: 11392 kB' 'Cached: 8767620 kB' 'SwapCached: 0 kB' 'Active: 6164928 kB' 'Inactive: 3424408 kB' 'Active(anon): 5792080 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 813544 kB' 'Mapped: 147092 kB' 'Shmem: 4981756 kB' 'KReclaimable: 153420 kB' 'Slab: 433724 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280304 kB' 'KernelStack: 12640 kB' 'PageTables: 7612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 7383264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193076 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
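The repeated pattern tests in this trace are bash xtrace output from the get_meminfo helper in setup/common.sh: it snapshots a meminfo file into an array (the printf at common.sh@16 above shows the full snapshot it iterates over), strips any "Node N " prefix, then re-reads the lines with IFS=': ' until the requested key matches, echoing the value and returning. That is why every meminfo field appears once against the escaped pattern (the \H\u\g\e... form is simply how xtrace prints the right-hand side of the comparison). Reconstructed from the commands traced at common.sh@16 through @33, the lookup is roughly the sketch below; this is assembled from the trace, not the verbatim common.sh source, and may differ in detail:

  shopt -s extglob                                  # needed for the +([0-9]) prefix strip
  get_meminfo() {                                   # sketch reconstructed from the xtrace
      local get=$1 node=$2                          # key to fetch, optional NUMA node
      local var val mem_f mem
      mem_f=/proc/meminfo
      # with a node number set, read the per-node file instead (node='' falls through,
      # which is why the trace tests /sys/devices/system/node/node/meminfo above)
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"                     # snapshot the file into an array
      mem=("${mem[@]#Node +([0-9]) }")              # drop the "Node N " prefix on per-node files
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue          # this test is what xtrace prints per field
          echo "$val"                               # e.g. 0 for HugePages_Rsvd
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Called as get_meminfo HugePages_Rsvd it returned 0 above (resv=0); the scan in progress here is the @110 get_meminfo HugePages_Total call, which reaches HugePages_Total and echoes 1536 a little further down.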
00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.441 13:59:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28829712 kB' 'MemUsed: 4000172 kB' 'SwapCached: 0 kB' 'Active: 1782628 kB' 'Inactive: 171572 kB' 'Active(anon): 1597252 kB' 'Inactive(anon): 0 kB' 'Active(file): 185376 kB' 'Inactive(file): 171572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1512580 kB' 'Mapped: 106696 kB' 'AnonPages: 444796 kB' 'Shmem: 1155632 kB' 'KernelStack: 6728 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83124 kB' 'Slab: 214436 kB' 'SReclaimable: 83124 kB' 'SUnreclaim: 131312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.442 13:59:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711836 kB' 'MemFree: 17292608 kB' 'MemUsed: 10419228 kB' 'SwapCached: 0 kB' 'Active: 4382148 kB' 'Inactive: 3252836 kB' 'Active(anon): 4194676 kB' 'Inactive(anon): 0 kB' 'Active(file): 187472 kB' 'Inactive(file): 3252836 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7266492 kB' 'Mapped: 40396 kB' 'AnonPages: 368536 kB' 'Shmem: 3826184 kB' 'KernelStack: 5896 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70296 kB' 'Slab: 219288 kB' 'SReclaimable: 70296 kB' 'SUnreclaim: 148992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.443 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
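For the per-node pass the same helper is pointed at /sys/devices/system/node/nodeN/meminfo: the node 0 snapshot above reports HugePages_Total: 512 and the node 1 snapshot reports HugePages_Total: 1024, matching the nodes_sys[0]=512 / nodes_sys[1]=1024 split recorded by get_nodes and summing to the global 1536 (the invariant re-checked at hugepages.sh@107/@110 is total == nr_hugepages + surplus + reserved, i.e. 1536 == 1536 + 0 + 0 in this run). MemUsed in each snapshot is simply MemTotal minus MemFree: node 0: 32829884 - 28829712 = 4000172 kB; node 1: 27711836 - 17292608 = 10419228 kB. The accounting loop traced at hugepages.sh@115 through @117 folds reserved and surplus pages into the expected per-node counts; roughly as below, with the caveat that the "+= $(...)" form is inferred from the "+= 0" the trace prints after each HugePages_Surp lookup:

  nodes_test=([0]=512 [1]=1024)   # expected per-node split, per the get_nodes trace
  resv=0                          # from the HugePages_Rsvd lookup earlier in this trace
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                                    # hugepages.sh@116
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # @117, adds 0 here
  done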
00:05:16.443 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31..32 -- # [xtrace condensed: the IFS=': ' / read -r var val _ loop walks the remaining /proc/meminfo keys (Active through HugePages_Free, in snapshot order), compares each against HugePages_Surp, and skips every non-match with 'continue'] 00:05:16.444 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.444 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.444 13:59:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:16.444 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.444 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.444 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.444 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.444 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' node0=512 expecting 512 00:05:16.444 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.444 13:59:24 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.444 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.444 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:16.444 node1=1024 expecting 1024 00:05:16.444 13:59:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:16.444 00:05:16.444 real 0m1.571s 00:05:16.444 user 0m0.678s 00:05:16.444 sys 0m0.857s 00:05:16.444 13:59:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.444 13:59:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:16.444 ************************************ 00:05:16.444 END TEST custom_alloc 00:05:16.444 ************************************ 00:05:16.444 13:59:24 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:16.444 13:59:24 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.444 13:59:24 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.444 13:59:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:16.444 ************************************ 00:05:16.444 START TEST no_shrink_alloc 00:05:16.444 ************************************ 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
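
The run above closes TEST custom_alloc with both expectations met (node0=512, node1=1024) and immediately opens no_shrink_alloc. Its get_test_nr_hugepages 2097152 0 call reduces to simple arithmetic: at this host's Hugepagesize of 2048 kB, 2097152 kB / 2048 kB = 1024 hugepages, all pinned to the single user-supplied node 0. A minimal bash sketch of that per-node accounting, reconstructed from the xtrace; the names follow the trace, but this is an approximation, not the verbatim setup/hugepages.sh:

  #!/usr/bin/env bash
  # Sketch of get_test_nr_hugepages as the trace implies it; approximate only.
  default_hugepages=2048   # kB, matching 'Hugepagesize: 2048 kB' on this host

  get_test_nr_hugepages() {
      local size=$1            # requested total in kB (2097152 in this run)
      shift                    # remaining arguments, if any, are NUMA node ids
      local node_ids=("$@")    # ('0') here
      (( size >= default_hugepages )) || return 1
      nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 = 1024
      get_test_nr_hugepages_per_node "${node_ids[@]}"
  }

  get_test_nr_hugepages_per_node() {
      local user_nodes=("$@")
      local _nr_hugepages=$nr_hugepages
      local _no_nodes=2        # NUMA node count of this box
      local -g nodes_test=()
      if (( ${#user_nodes[@]} > 0 )); then
          # an explicit node list pins the full count to each listed node
          for _no_nodes in "${user_nodes[@]}"; do
              nodes_test[_no_nodes]=$_nr_hugepages   # node 0 gets all 1024
          done
          return 0
      fi
      # without a node list the harness would spread pages across nodes instead
  }

  get_test_nr_hugepages 2097152 0
  echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"   # 1024 / 1024
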
00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.444 13:59:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:17.837 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:17.837 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:17.837 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:17.837 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:17.837 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:17.837 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:17.837 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:17.837 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:17.837 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:17.837 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:17.837 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:17.837 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:17.837 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:17.837 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:17.837 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:17.837 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:17.837 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:17.837 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:17.837 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:17.837 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:17.837 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:17.837 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:17.837 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:17.837 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:17.837 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:17.837 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:17.837 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:17.837 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:17.838 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:17.838 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.838 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.838 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.838 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.838 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.838 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.838 13:59:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.838 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.838 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47119360 kB' 'MemAvailable: 50499704 kB' 'Buffers: 11392 kB' 'Cached: 8767728 kB' 'SwapCached: 0 kB' 'Active: 6171960 kB' 'Inactive: 3424408 kB' 'Active(anon): 5799112 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 820452 kB' 'Mapped: 147572 kB' 'Shmem: 4981864 kB' 'KReclaimable: 153420 kB' 'Slab: 433684 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280264 kB' 'KernelStack: 12608 kB' 'PageTables: 7516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7389456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193160 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB' 00:05:17.838 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31..32 -- # [xtrace condensed: each snapshot key from MemTotal through HardwareCorrupted is read, compared against AnonHugePages, and skipped with 'continue'] 00:05:17.839 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.839 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.839 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:17.839 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:17.839 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:17.839 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17..31 -- # [xtrace condensed: get=HugePages_Surp, node unset, mem_f=/proc/meminfo, mapfile -t mem, 'Node N ' prefix strip, IFS=': ' read loop restarts] 00:05:17.839 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47120760 kB' 'MemAvailable: 50501104 kB' 'Buffers: 11392 kB' 'Cached: 8767728 kB' 'SwapCached: 0 kB' 'Active: 6172716 kB' 'Inactive: 3424408 kB' 'Active(anon): 5799868 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 821200 kB' 'Mapped: 148056 kB' 'Shmem: 4981864 kB' 'KReclaimable: 153420 kB' 'Slab: 433776 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280356 kB' 'KernelStack: 12640 kB' 'PageTables: 7628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7389472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193128 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
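
Both meminfo snapshots above come from the same setup/common.sh helper whose locals (@17..@29) appear in the trace. As a reading aid, here is a minimal reconstruction of get_meminfo as the xtrace implies it; treat it as a sketch under those assumptions, not the verbatim SPDK source:

  #!/usr/bin/env bash
  shopt -s extglob   # for the +([0-9]) pattern used below

  get_meminfo() {
      local get=$1 node=${2:-}    # key to fetch, optional NUMA node id
      local var val _
      local mem_f mem
      mem_f=/proc/meminfo
      # with a node id, prefer that node's own meminfo file when it exists
      if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip the 'Node N ' prefix of per-node files
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # the long 'continue' runs in this log
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo AnonHugePages   # prints 0 on this host, as traced above

In this run node is always empty, so the [[ -e /sys/devices/system/node/node/meminfo ]] test fails and every lookup reads the global /proc/meminfo.
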
00:05:17.839 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31..32 -- # [xtrace condensed: each snapshot key from MemTotal through HugePages_Rsvd is read, compared against HugePages_Surp, and skipped with 'continue'] 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17..31 -- # [xtrace condensed: get=HugePages_Rsvd, node unset, mem_f=/proc/meminfo, mapfile -t mem, 'Node N ' prefix strip, IFS=': ' read loop restarts] 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47120976 kB' 'MemAvailable: 50501320 kB' 'Buffers: 11392 kB' 'Cached: 8767736 kB' 'SwapCached: 0 kB' 'Active: 6166616 kB' 'Inactive: 3424408 kB' 'Active(anon): 5793768 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 815112 kB' 'Mapped: 147540 kB' 'Shmem: 4981872 kB' 'KReclaimable: 153420 kB' 'Slab: 433772 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280352 kB' 'KernelStack: 12640 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7383376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193140 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
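
At this point verify_nr_hugepages has collected anon=0 and surp=0 and is fetching HugePages_Rsvd from an identical snapshot; all three counters are 0 here, while HugePages_Total and HugePages_Free both read 1024, matching the 1024 pages requested above. To spot-check the same counters on a host outside the harness, an ordinary grep over /proc/meminfo is enough (illustrative, not part of the test):

  grep -E 'AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo
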
setup/common.sh@31 -- # IFS=': ' 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.841 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.842 13:59:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.842 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024 00:05:17.843 nr_hugepages=1024 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:17.843 resv_hugepages=0 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:17.843 surplus_hugepages=0 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:17.843 anon_hugepages=0 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.844 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47121192 kB' 'MemAvailable: 50501536 kB' 'Buffers: 11392 kB' 'Cached: 8767772 kB' 'SwapCached: 0 kB' 'Active: 6166924 kB' 'Inactive: 3424408 kB' 'Active(anon): 5794076 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 815424 kB' 'Mapped: 147124 kB' 'Shmem: 4981908 kB' 'KReclaimable: 153420 kB' 'Slab: 433760 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280340 kB' 'KernelStack: 12640 kB' 'PageTables: 7600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7383396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193140 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB' 00:05:17.844 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.844 13:59:25 setup.sh.hugepages.no_shrink_alloc -- 
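
The xtrace above shows the test's get_meminfo helper (setup/common.sh) resolving HugePages_Surp and HugePages_Rsvd: it snapshots the meminfo file into an array with mapfile, then walks it field by field under IFS=': ', skipping every key that is not the one requested. Below is a minimal standalone sketch of that parsing pattern, not the SPDK helper itself; get_meminfo_sketch is a hypothetical name, and the sed strip stands in for the extglob expansion ("${mem[@]#Node +([0-9]) }") the real script uses.

  #!/usr/bin/env bash
  # Sketch: fetch one field from /proc/meminfo (or a per-node copy).
  # Every non-matching key takes the 'continue' branch, which is why the
  # trace above repeats that branch once per meminfo field.
  get_meminfo_sketch() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")  # strip "Node <N> " prefixes
      return 1
  }
  surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in this run
  resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0 in this run

With both values at 0, the consistency checks the trace just executed, (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), both hold for the 1024 pages reported by the kernel.
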
00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:17.843 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:17.844 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47121192 kB' 'MemAvailable: 50501536 kB' 'Buffers: 11392 kB' 'Cached: 8767772 kB' 'SwapCached: 0 kB' 'Active: 6166924 kB' 'Inactive: 3424408 kB' 'Active(anon): 5794076 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 815424 kB' 'Mapped: 147124 kB' 'Shmem: 4981908 kB' 'KReclaimable: 153420 kB' 'Slab: 433760 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280340 kB' 'KernelStack: 12640 kB' 'PageTables: 7600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7383396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193140 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
[xtrace condensed: the same per-field scan repeats, with every key before HugePages_Total taking the 'continue' branch]
00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
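
get_nodes, traced above from setup/hugepages.sh, discovers the NUMA layout by globbing /sys/devices/system/node/node+([0-9]) (an extglob pattern) and records a hugepage count per node; this rig reports two nodes, with all 1024 pages on node0. The following sketch mirrors that bookkeeping. It is illustrative only: reading HugePages_Total out of each node's meminfo is an assumption, since the trace shows the assigned values (1024 and 0) but not the expression that produced them.

  #!/usr/bin/env bash
  # Sketch: enumerate NUMA nodes via sysfs and record a per-node
  # hugepage count, mirroring the nodes_sys array in the trace above.
  shopt -s extglob nullglob                  # +([0-9]) requires extglob
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # ${node##*node} keeps only the numeric suffix: .../node0 -> 0
      nodes_sys[${node##*node}]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
  done
  echo "no_nodes=${#nodes_sys[@]}"           # 2 in this run
  for n in "${!nodes_sys[@]}"; do
      echo "node$n=${nodes_sys[$n]}"         # node0=1024, node1=0 here
  done

The per-node verification loop that follows asks get_meminfo for HugePages_Surp on node 0, which is where the /sys/devices/system/node/node0/meminfo branch of the helper comes into play.
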
13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.845 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27763088 kB' 'MemUsed: 5066796 kB' 'SwapCached: 0 kB' 'Active: 1784764 kB' 'Inactive: 171572 kB' 'Active(anon): 1599388 kB' 'Inactive(anon): 0 kB' 'Active(file): 185376 kB' 'Inactive(file): 171572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1512592 kB' 'Mapped: 106728 kB' 'AnonPages: 446908 kB' 'Shmem: 1155644 kB' 'KernelStack: 6744 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83124 kB' 'Slab: 214460 kB' 'SReclaimable: 83124 kB' 'SUnreclaim: 131336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:17.846 13:59:25 
00:05:17.846 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: the IFS=': ' / read -r var val _ / continue loop skips each remaining /proc/meminfo field until it reaches HugePages_Surp]
00:05:18.109 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:18.109 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:18.109 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:18.110 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:18.110 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:18.110 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:18.110 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:18.110 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:18.110 node0=1024 expecting 1024
00:05:18.110 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:18.110 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:18.110 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:18.110 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:18.110 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:18.110 13:59:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:19.051 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:19.051 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:19.051 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:19.051 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:19.051 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:19.051 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:19.051 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:19.051 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:19.051 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:19.051 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:19.052 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:19.052 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:19.052 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:19.052 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:19.052 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:19.052 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:19.052 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:19.317 INFO: Requested 512 hugepages but 1024 already allocated on node0
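[editor's note: the trace above is the point of the no_shrink_alloc case: NRHUGE=512 is requested while 1024 hugepages are already allocated, and setup.sh keeps the larger pool instead of shrinking it. A minimal standalone sketch of that grow-only check follows, assuming only the standard sysfs counter path; it is an illustration, not SPDK's scripts/setup.sh.]

#!/usr/bin/env bash
# Grow-only hugepage allocation: request NRHUGE 2 MiB pages on node0, but
# never shrink a pre-existing larger pool (the behavior this test asserts).
NRHUGE=512
counter=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

before=$(<"$counter")
if (( before < NRHUGE )); then
    echo "$NRHUGE" > "$counter"   # grow the pool to the requested size
else
    echo "Requested $NRHUGE hugepages but $before already allocated on node0"
fi

# The pool ends at max(before, NRHUGE); on this box: 1024, not 512.
echo "node0=$(<"$counter") expecting $(( before > NRHUGE ? before : NRHUGE ))"

[Writing a smaller value to nr_hugepages would shrink the pool and release the extra pages, which is exactly what the test guards against.]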
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.317 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.318 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.318 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.318 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:19.318 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:19.318 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47109460 kB' 'MemAvailable: 50489804 kB' 'Buffers: 11392 kB' 'Cached: 8767836 kB' 'SwapCached: 0 kB' 'Active: 6170412 kB' 'Inactive: 3424408 kB' 'Active(anon): 5797564 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 818832 kB' 'Mapped: 147188 kB' 'Shmem: 4981972 kB' 'KReclaimable: 153420 kB' 'Slab: 433744 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280324 kB' 'KernelStack: 12768 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7385908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193316 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
00:05:19.318 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: the read/continue loop skips each /proc/meminfo field until it reaches AnonHugePages]
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
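[editor's note: the long runs of continue above are one shell loop scanning /proc/meminfo: split each line on ': ', skip every field that is not the one requested, and print the value of the match. Below is a standalone sketch of that pattern, equivalent in spirit to the get_meminfo helper being traced; it is an illustration, not the exact setup/common.sh source.]

#!/usr/bin/env bash
# Look up a single field in /proc/meminfo. Every skipped field corresponds
# to one 'continue' iteration in the trace above.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the requested field: skip
        echo "$val"
        return 0
    done </proc/meminfo
    return 1                               # field not present
}

get_meminfo AnonHugePages   # prints 0 on this box, hence anon=0 above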
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47107224 kB' 'MemAvailable: 50487568 kB' 'Buffers: 11392 kB' 'Cached: 8767840 kB' 'SwapCached: 0 kB' 'Active: 6171192 kB' 'Inactive: 3424408 kB' 'Active(anon): 5798344 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 819528 kB' 'Mapped: 147188 kB' 'Shmem: 4981976 kB' 'KReclaimable: 153420 kB' 'Slab: 433744 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280324 kB' 'KernelStack: 13184 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7386076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193412 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
00:05:19.319 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: the read/continue loop skips each /proc/meminfo field until it reaches HugePages_Surp]
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:19.321 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47108672 kB' 'MemAvailable: 50489016 kB' 'Buffers: 11392 kB' 'Cached: 8767840 kB' 'SwapCached: 0 kB' 'Active: 6170968 kB' 'Inactive: 3424408 kB' 'Active(anon): 5798120 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 819232 kB' 'Mapped: 147104 kB' 'Shmem: 4981976 kB' 'KReclaimable: 153420 kB' 'Slab: 433736 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280316 kB' 'KernelStack: 13056 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7383740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193268 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
00:05:19.322 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: the read/continue scan toward HugePages_Rsvd is in progress here and continues in the following log lines]
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.322 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.322 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.322 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.322 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.322 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:19.323 nr_hugepages=1024 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:19.323 resv_hugepages=0 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:19.323 surplus_hugepages=0 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:19.323 anon_hugepages=0 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47109548 kB' 'MemAvailable: 50489892 kB' 'Buffers: 11392 kB' 
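[Editor's note] The block above is one complete pass of setup/common.sh's get_meminfo helper. A minimal sketch of the logic the trace records, reconstructed only from the xtrace lines in this log (the real SPDK source may differ in details; in particular the not-found return path never appears here and is an assumption):

    shopt -s extglob                          # needed for the +([0-9]) patterns below
    get_meminfo() {
        local get=$1 node=$2 var val
        local mem_f=/proc/meminfo mem
        # per-node request: read the node's own meminfo file instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every line with "Node N "
        while IFS=': ' read -r var val _; do  # var=field name, val=number, _ swallows "kB"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1                              # assumed not-found path; never traced in this log
    }

Called as get_meminfo HugePages_Rsvd it prints 0 on this host, which hugepages.sh stores as resv=0 before emitting the nr_hugepages/resv_hugepages summary above.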
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 47109548 kB' 'MemAvailable: 50489892 kB' 'Buffers: 11392 kB' 'Cached: 8767880 kB' 'SwapCached: 0 kB' 'Active: 6169424 kB' 'Inactive: 3424408 kB' 'Active(anon): 5796576 kB' 'Inactive(anon): 0 kB' 'Active(file): 372848 kB' 'Inactive(file): 3424408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 817732 kB' 'Mapped: 147100 kB' 'Shmem: 4982016 kB' 'KReclaimable: 153420 kB' 'Slab: 433776 kB' 'SReclaimable: 153420 kB' 'SUnreclaim: 280356 kB' 'KernelStack: 12576 kB' 'PageTables: 7240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 7383760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193140 kB' 'VmallocChunk: 0 kB' 'Percpu: 30720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 425564 kB' 'DirectMap2M: 8931328 kB' 'DirectMap1G: 59768832 kB'
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:19.323 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace loop elided: the same test-and-continue cycle repeats for each field, MemFree through Unaccepted, until HugePages_Total matches]
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
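[Editor's note] The get_nodes trace above collapses to a small helper. A sketch reconstructed from the trace; the right-hand side of the nodes_sys assignment is an assumption, since xtrace only shows the already-expanded values (1024 for node0, 0 for node1):

    nodes_sys=()
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do   # extglob, as above
            # assumed source of the traced values 1024 and 0:
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}   # 2 on this machine
        (( no_nodes > 0 ))          # bail out early if sysfs exposes no NUMA nodes
    }

On this two-node box all 1024 pages sit on node0, which is why the per-node check that follows only queries HugePages_Surp for node 0.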
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27756952 kB' 'MemUsed: 5072932 kB' 'SwapCached: 0 kB' 'Active: 1787216 kB' 'Inactive: 171572 kB' 'Active(anon): 1601840 kB' 'Inactive(anon): 0 kB' 'Active(file): 185376 kB' 'Inactive(file): 171572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1512596 kB' 'Mapped: 106712 kB' 'AnonPages: 449420 kB' 'Shmem: 1155648 kB' 'KernelStack: 6744 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83124 kB' 'Slab: 214432 kB' 'SReclaimable: 83124 kB' 'SUnreclaim: 131308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:19.325 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace loop elided: the same test-and-continue cycle repeats for each node0 meminfo field, MemFree through HugePages_Free, until HugePages_Surp matches]
00:05:19.327 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:19.327 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:19.327 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:19.327 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:19.327 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:19.327 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:19.327 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:19.327 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:19.327 node0=1024 expecting 1024
00:05:19.327 13:59:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:19.327
00:05:19.327 real 0m2.881s
00:05:19.327 user 0m1.170s
00:05:19.327 sys 0m1.640s
00:05:19.327 13:59:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:19.327 13:59:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:19.327 ************************************
00:05:19.327 END TEST no_shrink_alloc
00:05:19.327 ************************************
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:19.327 13:59:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:19.327 13:59:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:19.327 13:59:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:19.327 13:59:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:19.327 13:59:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:19.327 13:59:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:19.327 13:59:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:19.327 13:59:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:19.327 00:05:19.327 real 0m11.832s 00:05:19.327 user 0m4.502s 00:05:19.327 sys 0m6.189s 00:05:19.327 13:59:27 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.327 13:59:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:19.327 ************************************ 00:05:19.327 END TEST hugepages 00:05:19.327 ************************************ 00:05:19.327 13:59:27 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:19.327 13:59:27 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.327 13:59:27 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.327 13:59:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:19.586 ************************************ 00:05:19.586 START TEST driver 00:05:19.586 ************************************ 00:05:19.586 13:59:27 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:19.586 * Looking for test storage... 
00:05:19.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:19.586 13:59:27 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:19.586 13:59:27 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:19.586 13:59:27 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:22.130 13:59:29 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:22.130 13:59:29 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.130 13:59:29 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.130 13:59:29 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:22.130 ************************************ 00:05:22.130 START TEST guess_driver 00:05:22.130 ************************************ 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:22.130 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:22.130 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:22.130 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:22.130 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:22.130 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:22.130 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:22.130 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:22.130 13:59:30 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:22.130 Looking for driver=vfio-pci 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.130 13:59:30 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.509 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.510 13:59:31 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:23.510 13:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.451 13:59:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.451 13:59:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.451 13:59:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.711 13:59:32 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:24.711 13:59:32 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:24.711 13:59:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:24.711 13:59:32 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:27.247 00:05:27.247 real 0m5.068s 00:05:27.247 user 0m1.132s 00:05:27.247 sys 0m1.929s 00:05:27.247 13:59:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.247 13:59:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:27.247 ************************************ 00:05:27.247 END TEST guess_driver 00:05:27.247 ************************************ 00:05:27.247 00:05:27.247 real 0m7.756s 00:05:27.247 user 0m1.747s 00:05:27.247 sys 0m2.954s 00:05:27.247 13:59:35 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.247 
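Reconstructed from the guess_driver trace above: with 141 IOMMU groups present and modprobe able to resolve vfio_pci and its dependencies to real .ko files, the test settles on vfio-pci. A simplified standalone sketch of that decision (the real logic lives in test/setup/driver.sh; the uio_pci_generic fallback is an assumption inferred from the "No valid driver found" branch the trace tests against):

    is_driver() {
        # usable if modprobe resolves the module and its deps to .ko files,
        # matching the "insmod .../*.ko.xz" lines captured in the log
        [[ $(modprobe --show-depends "$1" 2>/dev/null) == *.ko* ]]
    }

    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        if (( ${#groups[@]} > 0 )) && is_driver vfio_pci; then
            echo vfio-pci
        elif is_driver uio_pci_generic; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }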
13:59:35 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:27.247 ************************************ 00:05:27.247 END TEST driver 00:05:27.247 ************************************ 00:05:27.247 13:59:35 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:27.247 13:59:35 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.247 13:59:35 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.247 13:59:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:27.247 ************************************ 00:05:27.247 START TEST devices 00:05:27.247 ************************************ 00:05:27.247 13:59:35 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:27.247 * Looking for test storage... 00:05:27.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:27.247 13:59:35 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:27.247 13:59:35 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:27.247 13:59:35 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:27.247 13:59:35 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:29.151 13:59:36 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:29.151 13:59:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:29.151 13:59:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:29.151 13:59:36 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:29.151 13:59:36 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:29.151 13:59:36 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:29.151 13:59:36 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:29.151 13:59:36 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:29.151 13:59:36 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:29.151 13:59:36 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:29.151 13:59:36 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:29.151 13:59:36 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:29.151 13:59:36 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:29.151 13:59:36 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:29.151 13:59:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:29.151 13:59:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:29.151 13:59:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:29.151 13:59:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:05:29.151 13:59:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:05:29.151 13:59:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:29.151 13:59:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:29.151 13:59:36 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:29.151 No valid GPT data, 
bailing 00:05:29.152 13:59:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:29.152 13:59:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:29.152 13:59:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:29.152 13:59:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:29.152 13:59:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:29.152 13:59:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:29.152 13:59:36 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:29.152 13:59:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:29.152 13:59:36 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:29.152 13:59:36 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:05:29.152 13:59:36 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:29.152 13:59:36 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:29.152 13:59:36 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:29.152 13:59:36 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.152 13:59:36 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.152 13:59:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:29.152 ************************************ 00:05:29.152 START TEST nvme_mount 00:05:29.152 ************************************ 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:29.152 13:59:36 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:29.152 13:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:30.092 Creating new GPT entries in memory. 00:05:30.092 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:30.092 other utilities. 00:05:30.092 13:59:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:30.092 13:59:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.092 13:59:37 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:30.092 13:59:37 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:30.092 13:59:37 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:31.027 Creating new GPT entries in memory. 00:05:31.027 The operation has completed successfully. 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 94738 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
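Condensed, the nvme_mount setup traced above is a four-step sequence: wipe the GPT, carve one 1 GiB partition (sectors 2048..2099199 = 2097152 512-byte sectors), format it, and mount it, with sync_dev_uevents.sh waiting on the kernel's partition uevent so /dev/nvme0n1p1 exists before mkfs runs. Replayed as plain shell, using the device and mount point from the log:

    disk=/dev/nvme0n1
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all                           # destroy old GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # 1 GiB partition; flock serializes writers
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"                          # -q quiet, -F skip sanity prompts
    mount "${disk}p1" "$mnt"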
00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.027 13:59:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:32.412 13:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:32.412 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:32.412 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:32.672 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:32.672 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:32.672 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:32.673 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:32.673 13:59:40 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.673 13:59:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.611 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:33.871 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.132 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:05:34.132 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:34.132 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:34.132 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:34.132 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:34.132 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:34.132 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:34.132 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:34.132 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.132 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:34.132 13:59:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:34.132 13:59:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.132 13:59:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 
00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:35.073 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.336 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.336 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:35.336 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:35.336 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:35.336 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.336 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.336 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.336 13:59:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:35.336 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:35.336 00:05:35.336 real 0m6.472s 00:05:35.336 user 0m1.483s 00:05:35.336 sys 0m2.586s 00:05:35.336 13:59:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.336 13:59:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:35.336 ************************************ 00:05:35.336 END TEST nvme_mount 00:05:35.336 ************************************ 00:05:35.336 
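The cleanup_nvme trace just above undoes that setup in reverse; the hex in the wipefs output is the ext4 superblock magic (53 ef at offset 0x438) and, on the whole disk, the GPT "EFI PART" signatures plus the 55 aa protective-MBR mark. As a standalone sketch of the same steps:

    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1  # drop the filesystem signature
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1      # drop fs/partition-table signatures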
13:59:43 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:35.336 13:59:43 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.336 13:59:43 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.336 13:59:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:35.336 ************************************ 00:05:35.336 START TEST dm_mount 00:05:35.336 ************************************ 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:35.336 13:59:43 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:36.277 Creating new GPT entries in memory. 00:05:36.277 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:36.277 other utilities. 00:05:36.277 13:59:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:36.537 13:59:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:36.537 13:59:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:36.537 13:59:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:36.537 13:59:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:37.489 Creating new GPT entries in memory. 00:05:37.489 The operation has completed successfully. 
00:05:37.489 13:59:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:37.489 13:59:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:37.489 13:59:45 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:37.489 13:59:45 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:37.489 13:59:45 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:38.497 The operation has completed successfully. 00:05:38.497 13:59:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:38.497 13:59:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:38.497 13:59:46 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 97133 00:05:38.497 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:38.497 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.497 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:38.497 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:38.497 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:38.497 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.497 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 
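The trace shows only "dmsetup create nvme_dm_test"; the table itself is never echoed, but the holders checks on both nvme0n1p1 and nvme0n1p2 imply dm-0 is built over the two partitions. An assumed linear concatenation consistent with the sgdisk calls above (two segments of 2097152 sectors each); the actual table devices.sh passes may differ:

    # table format per segment: <start> <length> linear <device> <offset>, in sectors
    dmsetup create nvme_dm_test <<'EOF'
    0 2097152 linear /dev/nvme0n1p1 0
    2097152 2097152 linear /dev/nvme0n1p2 0
    EOF
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount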
00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.498 13:59:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.444 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.703 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.703 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: 
holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:39.703 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:39.703 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.703 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.703 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.703 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.703 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:39.704 13:59:47 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.088 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:41.089 13:59:48 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:41.089 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:41.089 13:59:49 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:41.089 13:59:49 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:41.089 00:05:41.089 real 0m5.733s 00:05:41.089 user 0m0.975s 00:05:41.089 sys 0m1.637s 00:05:41.089 13:59:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.089 13:59:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:41.089 ************************************ 00:05:41.089 END TEST dm_mount 00:05:41.089 ************************************ 00:05:41.089 13:59:49 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:41.089 13:59:49 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:41.089 13:59:49 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:41.089 13:59:49 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:41.089 13:59:49 setup.sh.devices -- 
setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:41.089 13:59:49 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:41.089 13:59:49 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:41.349 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:41.349 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:05:41.349 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:41.349 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:41.349 13:59:49 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:05:41.349 13:59:49 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:41.349 13:59:49 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:41.349 13:59:49 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:41.349 13:59:49 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:41.349 13:59:49 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:41.349 13:59:49 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:05:41.349
00:05:41.349 real 0m14.171s
00:05:41.349 user 0m3.146s
00:05:41.349 sys 0m5.270s
00:05:41.349 13:59:49 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:41.349 13:59:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:41.349 ************************************
00:05:41.349 END TEST devices
00:05:41.349 ************************************
00:05:41.349
00:05:41.349 real 0m44.644s
00:05:41.349 user 0m12.701s
00:05:41.349 sys 0m20.014s
00:05:41.349 13:59:49 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:41.349 13:59:49 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:41.349 ************************************
00:05:41.349 END TEST setup.sh
00:05:41.349 ************************************
00:05:41.349 13:59:49 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:42.728 Hugepages
00:05:42.728 node hugesize free / total
00:05:42.728 node0 1048576kB 0 / 0
00:05:42.728 node0 2048kB 2048 / 2048
00:05:42.728 node1 1048576kB 0 / 0
00:05:42.728 node1 2048kB 0 / 0
00:05:42.728
00:05:42.728 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:42.728 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:05:42.728 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:05:42.728 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:05:42.728 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:05:42.728 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:05:42.728 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:05:42.728 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:05:42.728 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:05:42.728 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:05:42.728 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:05:42.728 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:05:42.728 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:05:42.728 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:05:42.728 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:05:42.728 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:05:42.728 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:05:42.728 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:05:42.728 13:59:50 -- spdk/autotest.sh@130 -- # uname -s
00:05:42.728 13:59:50 -- spdk/autotest.sh@130 -- #
[[ Linux == Linux ]] 00:05:42.728 13:59:50 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:42.728 13:59:50 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:44.110 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:44.110 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:44.110 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:44.110 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:44.110 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:44.110 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:44.110 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:44.110 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:44.110 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:44.110 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:44.110 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:44.110 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:44.110 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:44.110 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:44.110 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:44.110 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:45.051 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:45.311 13:59:53 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:46.253 13:59:54 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:46.253 13:59:54 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:46.253 13:59:54 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:46.253 13:59:54 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:46.253 13:59:54 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:46.253 13:59:54 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:46.253 13:59:54 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:46.253 13:59:54 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:46.253 13:59:54 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:46.253 13:59:54 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:46.253 13:59:54 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:05:46.253 13:59:54 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:47.634 Waiting for block devices as requested 00:05:47.634 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:47.634 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:47.634 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:47.634 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:47.895 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:47.895 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:47.895 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:47.895 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:48.157 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:05:48.157 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:48.416 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:48.416 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:48.416 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:48.416 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:48.677 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:48.677 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:48.677 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:48.936 13:59:56 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:48.936 13:59:56 -- 
common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:05:48.936 13:59:56 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:48.936 13:59:56 -- common/autotest_common.sh@1502 -- # grep 0000:0b:00.0/nvme/nvme 00:05:48.936 13:59:56 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:05:48.936 13:59:56 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:05:48.936 13:59:56 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:05:48.936 13:59:56 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:48.936 13:59:56 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:48.936 13:59:56 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:48.936 13:59:56 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:48.936 13:59:56 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:48.936 13:59:56 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:48.936 13:59:56 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:48.936 13:59:56 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:48.936 13:59:56 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:48.936 13:59:56 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:48.936 13:59:56 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:48.936 13:59:56 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:48.936 13:59:56 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:48.936 13:59:56 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:48.936 13:59:56 -- common/autotest_common.sh@1557 -- # continue 00:05:48.936 13:59:56 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:48.936 13:59:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.936 13:59:56 -- common/autotest_common.sh@10 -- # set +x 00:05:48.936 13:59:56 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:48.936 13:59:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:48.936 13:59:56 -- common/autotest_common.sh@10 -- # set +x 00:05:48.936 13:59:56 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:50.334 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:50.334 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:50.334 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:50.334 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:50.334 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:50.334 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:50.334 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:50.334 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:50.334 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:50.334 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:50.334 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:50.334 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:50.334 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:50.334 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:50.334 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:50.334 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:51.276 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:51.276 13:59:59 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:51.276 13:59:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:51.276 13:59:59 -- 
common/autotest_common.sh@10 -- # set +x 00:05:51.276 13:59:59 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:51.276 13:59:59 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:51.276 13:59:59 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:51.276 13:59:59 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:51.276 13:59:59 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:51.276 13:59:59 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:51.276 13:59:59 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:51.276 13:59:59 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:51.276 13:59:59 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:51.276 13:59:59 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:51.276 13:59:59 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:51.534 13:59:59 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:51.534 13:59:59 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:05:51.534 13:59:59 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:51.534 13:59:59 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:05:51.534 13:59:59 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:51.534 13:59:59 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:51.534 13:59:59 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:51.534 13:59:59 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:0b:00.0 00:05:51.534 13:59:59 -- common/autotest_common.sh@1592 -- # [[ -z 0000:0b:00.0 ]] 00:05:51.534 13:59:59 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=102443 00:05:51.534 13:59:59 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.534 13:59:59 -- common/autotest_common.sh@1598 -- # waitforlisten 102443 00:05:51.534 13:59:59 -- common/autotest_common.sh@831 -- # '[' -z 102443 ']' 00:05:51.534 13:59:59 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.534 13:59:59 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.534 13:59:59 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.534 13:59:59 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.534 13:59:59 -- common/autotest_common.sh@10 -- # set +x 00:05:51.534 [2024-07-26 13:59:59.406646] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
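A minimal illustration of the wait-for-RPC pattern behind the waitforlisten lines above, assuming the in-tree scripts/rpc.py client and the default /var/tmp/spdk.sock socket; the loop is a sketch of the idea, not the harness's actual code:

    spdk_tgt_pid=$!
    for _ in $(seq 1 100); do
        # the target counts as up once its RPC socket answers a harmless query
        if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1   # target died before listening
        sleep 0.1
    done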
00:05:51.534 [2024-07-26 13:59:59.406734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102443 ]
00:05:51.534 EAL: No free 2048 kB hugepages reported on node 1
00:05:51.534 [2024-07-26 13:59:59.466074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:51.793 [2024-07-26 13:59:59.567734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:52.361 14:00:00 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:52.361 14:00:00 -- common/autotest_common.sh@864 -- # return 0
00:05:52.361 14:00:00 -- common/autotest_common.sh@1600 -- # bdf_id=0
00:05:52.361 14:00:00 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}"
00:05:52.361 14:00:00 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0
00:05:55.649 nvme0n1
00:05:55.649 14:00:03 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:05:55.649 [2024-07-26 14:00:03.637430] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:05:55.649 [2024-07-26 14:00:03.637477] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:05:55.649 request:
00:05:55.649 {
00:05:55.649 "nvme_ctrlr_name": "nvme0",
00:05:55.649 "password": "test",
00:05:55.649 "method": "bdev_nvme_opal_revert",
00:05:55.649 "req_id": 1
00:05:55.649 }
00:05:55.649 Got JSON-RPC error response
00:05:55.649 response:
00:05:55.649 {
00:05:55.649 "code": -32603,
00:05:55.649 "message": "Internal error"
00:05:55.649 }
00:05:55.649 14:00:03 -- common/autotest_common.sh@1604 -- # true
00:05:55.649 14:00:03 -- common/autotest_common.sh@1605 -- # (( ++bdf_id ))
00:05:55.649 14:00:03 -- common/autotest_common.sh@1608 -- # killprocess 102443
00:05:55.649 14:00:03 -- common/autotest_common.sh@950 -- # '[' -z 102443 ']'
00:05:55.649 14:00:03 -- common/autotest_common.sh@954 -- # kill -0 102443
00:05:55.649 14:00:03 -- common/autotest_common.sh@955 -- # uname
00:05:55.649 14:00:03 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:55.649 14:00:03 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102443
00:05:55.906 14:00:03 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:55.906 14:00:03 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:55.906 14:00:03 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102443'
00:05:55.906 killing process with pid 102443
00:05:55.906 14:00:03 -- common/autotest_common.sh@969 -- # kill 102443
00:05:55.906 14:00:03 -- common/autotest_common.sh@974 -- # wait 102443
00:05:57.805 14:00:05 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:05:57.805 14:00:05 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:05:57.805 14:00:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:05:57.805 14:00:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:05:57.805 14:00:05 -- spdk/autotest.sh@162 -- # timing_enter lib
00:05:57.805 14:00:05 -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:57.805 14:00:05 -- common/autotest_common.sh@10 -- # set +x
00:05:57.805 14:00:05 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:05:57.805 14:00:05 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:57.805 14:00:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.805 14:00:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.805 14:00:05 -- common/autotest_common.sh@10 -- # set +x 00:05:57.805 ************************************ 00:05:57.805 START TEST env 00:05:57.805 ************************************ 00:05:57.805 14:00:05 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:57.805 * Looking for test storage... 00:05:57.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:57.805 14:00:05 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:57.806 14:00:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.806 14:00:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.806 14:00:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.806 ************************************ 00:05:57.806 START TEST env_memory 00:05:57.806 ************************************ 00:05:57.806 14:00:05 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:57.806 00:05:57.806 00:05:57.806 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.806 http://cunit.sourceforge.net/ 00:05:57.806 00:05:57.806 00:05:57.806 Suite: memory 00:05:57.806 Test: alloc and free memory map ...[2024-07-26 14:00:05.599073] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:57.806 passed 00:05:57.806 Test: mem map translation ...[2024-07-26 14:00:05.619841] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:57.806 [2024-07-26 14:00:05.619862] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:57.806 [2024-07-26 14:00:05.619912] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:57.806 [2024-07-26 14:00:05.619923] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:57.806 passed 00:05:57.806 Test: mem map registration ...[2024-07-26 14:00:05.661931] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:57.806 [2024-07-26 14:00:05.661951] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:57.806 passed 00:05:57.806 Test: mem map adjacent registrations ...passed 00:05:57.806 00:05:57.806 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.806 suites 1 1 n/a 0 0 00:05:57.806 tests 4 4 4 0 0 00:05:57.806 asserts 152 152 152 0 n/a 00:05:57.806 00:05:57.806 Elapsed time = 0.141 seconds 00:05:57.806 00:05:57.806 real 0m0.149s 00:05:57.806 user 0m0.141s 00:05:57.806 sys 0m0.007s 00:05:57.806 14:00:05 
env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.806 14:00:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:57.806 ************************************ 00:05:57.806 END TEST env_memory 00:05:57.806 ************************************ 00:05:57.806 14:00:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:57.806 14:00:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.806 14:00:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.806 14:00:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.806 ************************************ 00:05:57.806 START TEST env_vtophys 00:05:57.806 ************************************ 00:05:57.806 14:00:05 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:57.806 EAL: lib.eal log level changed from notice to debug 00:05:57.806 EAL: Detected lcore 0 as core 0 on socket 0 00:05:57.806 EAL: Detected lcore 1 as core 1 on socket 0 00:05:57.806 EAL: Detected lcore 2 as core 2 on socket 0 00:05:57.806 EAL: Detected lcore 3 as core 3 on socket 0 00:05:57.806 EAL: Detected lcore 4 as core 4 on socket 0 00:05:57.806 EAL: Detected lcore 5 as core 5 on socket 0 00:05:57.806 EAL: Detected lcore 6 as core 8 on socket 0 00:05:57.806 EAL: Detected lcore 7 as core 9 on socket 0 00:05:57.806 EAL: Detected lcore 8 as core 10 on socket 0 00:05:57.806 EAL: Detected lcore 9 as core 11 on socket 0 00:05:57.806 EAL: Detected lcore 10 as core 12 on socket 0 00:05:57.806 EAL: Detected lcore 11 as core 13 on socket 0 00:05:57.806 EAL: Detected lcore 12 as core 0 on socket 1 00:05:57.806 EAL: Detected lcore 13 as core 1 on socket 1 00:05:57.806 EAL: Detected lcore 14 as core 2 on socket 1 00:05:57.806 EAL: Detected lcore 15 as core 3 on socket 1 00:05:57.806 EAL: Detected lcore 16 as core 4 on socket 1 00:05:57.806 EAL: Detected lcore 17 as core 5 on socket 1 00:05:57.806 EAL: Detected lcore 18 as core 8 on socket 1 00:05:57.806 EAL: Detected lcore 19 as core 9 on socket 1 00:05:57.806 EAL: Detected lcore 20 as core 10 on socket 1 00:05:57.806 EAL: Detected lcore 21 as core 11 on socket 1 00:05:57.806 EAL: Detected lcore 22 as core 12 on socket 1 00:05:57.806 EAL: Detected lcore 23 as core 13 on socket 1 00:05:57.806 EAL: Detected lcore 24 as core 0 on socket 0 00:05:57.806 EAL: Detected lcore 25 as core 1 on socket 0 00:05:57.806 EAL: Detected lcore 26 as core 2 on socket 0 00:05:57.806 EAL: Detected lcore 27 as core 3 on socket 0 00:05:57.806 EAL: Detected lcore 28 as core 4 on socket 0 00:05:57.806 EAL: Detected lcore 29 as core 5 on socket 0 00:05:57.806 EAL: Detected lcore 30 as core 8 on socket 0 00:05:57.806 EAL: Detected lcore 31 as core 9 on socket 0 00:05:57.806 EAL: Detected lcore 32 as core 10 on socket 0 00:05:57.806 EAL: Detected lcore 33 as core 11 on socket 0 00:05:57.806 EAL: Detected lcore 34 as core 12 on socket 0 00:05:57.806 EAL: Detected lcore 35 as core 13 on socket 0 00:05:57.806 EAL: Detected lcore 36 as core 0 on socket 1 00:05:57.806 EAL: Detected lcore 37 as core 1 on socket 1 00:05:57.806 EAL: Detected lcore 38 as core 2 on socket 1 00:05:57.806 EAL: Detected lcore 39 as core 3 on socket 1 00:05:57.806 EAL: Detected lcore 40 as core 4 on socket 1 00:05:57.806 EAL: Detected lcore 41 as core 5 on socket 1 00:05:57.806 EAL: Detected lcore 42 as core 8 on socket 1 00:05:57.806 EAL: Detected lcore 43 as core 9 
on socket 1 00:05:57.806 EAL: Detected lcore 44 as core 10 on socket 1 00:05:57.806 EAL: Detected lcore 45 as core 11 on socket 1 00:05:57.806 EAL: Detected lcore 46 as core 12 on socket 1 00:05:57.806 EAL: Detected lcore 47 as core 13 on socket 1 00:05:57.806 EAL: Maximum logical cores by configuration: 128 00:05:57.806 EAL: Detected CPU lcores: 48 00:05:57.806 EAL: Detected NUMA nodes: 2 00:05:57.806 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:57.806 EAL: Detected shared linkage of DPDK 00:05:57.806 EAL: No shared files mode enabled, IPC will be disabled 00:05:57.806 EAL: Bus pci wants IOVA as 'DC' 00:05:57.806 EAL: Buses did not request a specific IOVA mode. 00:05:57.806 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:57.806 EAL: Selected IOVA mode 'VA' 00:05:57.806 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.806 EAL: Probing VFIO support... 00:05:57.806 EAL: IOMMU type 1 (Type 1) is supported 00:05:57.806 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:57.806 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:57.806 EAL: VFIO support initialized 00:05:57.806 EAL: Ask a virtual area of 0x2e000 bytes 00:05:57.806 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:57.806 EAL: Setting up physically contiguous memory... 00:05:57.806 EAL: Setting maximum number of open files to 524288 00:05:57.806 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:57.806 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:57.806 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:57.806 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.806 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:57.806 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:57.806 EAL: Ask a virtual area of 0x400000000 bytes 00:05:57.806 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:57.806 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:57.806 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.806 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:57.806 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:57.806 EAL: Ask a virtual area of 0x400000000 bytes 00:05:57.806 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:57.806 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:57.806 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.806 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:57.806 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:57.806 EAL: Ask a virtual area of 0x400000000 bytes 00:05:57.806 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:57.806 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:57.806 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.806 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:57.806 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:57.806 EAL: Ask a virtual area of 0x400000000 bytes 00:05:57.806 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:57.806 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:57.806 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:57.806 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.806 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:57.806 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:57.806 EAL: Ask a virtual 
area of 0x400000000 bytes 00:05:57.806 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:57.806 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:57.806 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.806 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:57.806 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:57.806 EAL: Ask a virtual area of 0x400000000 bytes 00:05:57.806 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:57.806 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:57.806 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.806 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:57.806 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:57.806 EAL: Ask a virtual area of 0x400000000 bytes 00:05:57.806 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:57.806 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:57.806 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.806 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:57.806 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:57.807 EAL: Ask a virtual area of 0x400000000 bytes 00:05:57.807 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:57.807 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:57.807 EAL: Hugepages will be freed exactly as allocated. 00:05:57.807 EAL: No shared files mode enabled, IPC is disabled 00:05:57.807 EAL: No shared files mode enabled, IPC is disabled 00:05:57.807 EAL: TSC frequency is ~2700000 KHz 00:05:57.807 EAL: Main lcore 0 is ready (tid=7f407da3aa00;cpuset=[0]) 00:05:57.807 EAL: Trying to obtain current memory policy. 00:05:57.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.807 EAL: Restoring previous memory policy: 0 00:05:57.807 EAL: request: mp_malloc_sync 00:05:57.807 EAL: No shared files mode enabled, IPC is disabled 00:05:57.807 EAL: Heap on socket 0 was expanded by 2MB 00:05:57.807 EAL: No shared files mode enabled, IPC is disabled 00:05:58.065 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:58.065 EAL: Mem event callback 'spdk:(nil)' registered 00:05:58.065 00:05:58.065 00:05:58.065 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.065 http://cunit.sourceforge.net/ 00:05:58.065 00:05:58.065 00:05:58.065 Suite: components_suite 00:05:58.065 Test: vtophys_malloc_test ...passed 00:05:58.065 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:58.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.065 EAL: Restoring previous memory policy: 4 00:05:58.065 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.065 EAL: request: mp_malloc_sync 00:05:58.065 EAL: No shared files mode enabled, IPC is disabled 00:05:58.065 EAL: Heap on socket 0 was expanded by 4MB 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was shrunk by 4MB 00:05:58.066 EAL: Trying to obtain current memory policy. 
00:05:58.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.066 EAL: Restoring previous memory policy: 4 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was expanded by 6MB 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was shrunk by 6MB 00:05:58.066 EAL: Trying to obtain current memory policy. 00:05:58.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.066 EAL: Restoring previous memory policy: 4 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was expanded by 10MB 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was shrunk by 10MB 00:05:58.066 EAL: Trying to obtain current memory policy. 00:05:58.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.066 EAL: Restoring previous memory policy: 4 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was expanded by 18MB 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was shrunk by 18MB 00:05:58.066 EAL: Trying to obtain current memory policy. 00:05:58.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.066 EAL: Restoring previous memory policy: 4 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was expanded by 34MB 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was shrunk by 34MB 00:05:58.066 EAL: Trying to obtain current memory policy. 00:05:58.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.066 EAL: Restoring previous memory policy: 4 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was expanded by 66MB 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was shrunk by 66MB 00:05:58.066 EAL: Trying to obtain current memory policy. 
00:05:58.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.066 EAL: Restoring previous memory policy: 4 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was expanded by 130MB 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was shrunk by 130MB 00:05:58.066 EAL: Trying to obtain current memory policy. 00:05:58.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.066 EAL: Restoring previous memory policy: 4 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was expanded by 258MB 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.324 EAL: request: mp_malloc_sync 00:05:58.325 EAL: No shared files mode enabled, IPC is disabled 00:05:58.325 EAL: Heap on socket 0 was shrunk by 258MB 00:05:58.325 EAL: Trying to obtain current memory policy. 00:05:58.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.325 EAL: Restoring previous memory policy: 4 00:05:58.325 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.325 EAL: request: mp_malloc_sync 00:05:58.325 EAL: No shared files mode enabled, IPC is disabled 00:05:58.325 EAL: Heap on socket 0 was expanded by 514MB 00:05:58.583 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.583 EAL: request: mp_malloc_sync 00:05:58.583 EAL: No shared files mode enabled, IPC is disabled 00:05:58.583 EAL: Heap on socket 0 was shrunk by 514MB 00:05:58.583 EAL: Trying to obtain current memory policy. 
00:05:58.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.842 EAL: Restoring previous memory policy: 4 00:05:58.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.842 EAL: request: mp_malloc_sync 00:05:58.842 EAL: No shared files mode enabled, IPC is disabled 00:05:58.842 EAL: Heap on socket 0 was expanded by 1026MB 00:05:59.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.358 EAL: request: mp_malloc_sync 00:05:59.358 EAL: No shared files mode enabled, IPC is disabled 00:05:59.359 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:59.359 passed 00:05:59.359 00:05:59.359 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.359 suites 1 1 n/a 0 0 00:05:59.359 tests 2 2 2 0 0 00:05:59.359 asserts 497 497 497 0 n/a 00:05:59.359 00:05:59.359 Elapsed time = 1.288 seconds 00:05:59.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.359 EAL: request: mp_malloc_sync 00:05:59.359 EAL: No shared files mode enabled, IPC is disabled 00:05:59.359 EAL: Heap on socket 0 was shrunk by 2MB 00:05:59.359 EAL: No shared files mode enabled, IPC is disabled 00:05:59.359 EAL: No shared files mode enabled, IPC is disabled 00:05:59.359 EAL: No shared files mode enabled, IPC is disabled 00:05:59.359 00:05:59.359 real 0m1.404s 00:05:59.359 user 0m0.820s 00:05:59.359 sys 0m0.546s 00:05:59.359 14:00:07 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.359 14:00:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:59.359 ************************************ 00:05:59.359 END TEST env_vtophys 00:05:59.359 ************************************ 00:05:59.359 14:00:07 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:59.359 14:00:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.359 14:00:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.359 14:00:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.359 ************************************ 00:05:59.359 START TEST env_pci 00:05:59.359 ************************************ 00:05:59.359 14:00:07 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:59.359 00:05:59.359 00:05:59.359 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.359 http://cunit.sourceforge.net/ 00:05:59.359 00:05:59.359 00:05:59.359 Suite: pci 00:05:59.359 Test: pci_hook ...[2024-07-26 14:00:07.226048] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 103565 has claimed it 00:05:59.359 EAL: Cannot find device (10000:00:01.0) 00:05:59.359 EAL: Failed to attach device on primary process 00:05:59.359 passed 00:05:59.359 00:05:59.359 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.359 suites 1 1 n/a 0 0 00:05:59.359 tests 1 1 1 0 0 00:05:59.359 asserts 25 25 25 0 n/a 00:05:59.359 00:05:59.359 Elapsed time = 0.022 seconds 00:05:59.359 00:05:59.359 real 0m0.034s 00:05:59.359 user 0m0.011s 00:05:59.359 sys 0m0.023s 00:05:59.359 14:00:07 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.359 14:00:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:59.359 ************************************ 00:05:59.359 END TEST env_pci 00:05:59.359 ************************************ 00:05:59.359 14:00:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:59.359 
14:00:07 env -- env/env.sh@15 -- # uname 00:05:59.359 14:00:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:59.359 14:00:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:59.359 14:00:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:59.359 14:00:07 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:59.359 14:00:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.359 14:00:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.359 ************************************ 00:05:59.359 START TEST env_dpdk_post_init 00:05:59.359 ************************************ 00:05:59.359 14:00:07 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:59.359 EAL: Detected CPU lcores: 48 00:05:59.359 EAL: Detected NUMA nodes: 2 00:05:59.359 EAL: Detected shared linkage of DPDK 00:05:59.359 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:59.359 EAL: Selected IOVA mode 'VA' 00:05:59.359 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.359 EAL: VFIO support initialized 00:05:59.359 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:59.619 EAL: Using IOMMU type 1 (Type 1) 00:05:59.619 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:59.619 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:59.619 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:59.619 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:59.619 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:59.619 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:59.619 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:59.619 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:00.557 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:06:00.557 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:00.557 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:00.557 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:00.557 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:00.557 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:00.557 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:00.557 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:00.557 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:03.859 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:06:03.859 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:06:03.859 Starting DPDK initialization... 00:06:03.859 Starting SPDK post initialization... 00:06:03.859 SPDK NVMe probe 00:06:03.859 Attaching to 0000:0b:00.0 00:06:03.859 Attached to 0000:0b:00.0 00:06:03.859 Cleaning up... 
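The ioatdma -> vfio-pci and vfio-pci -> nvme lines in this log are setup.sh rebinding PCI functions; the resulting binding can be read back through standard Linux sysfs, as in this illustrative snippet (the BDF is the NVMe device from the log, nothing here is harness code):

    bdf=0000:0b:00.0
    # a PCI function exposes a 'driver' symlink once some driver has claimed it
    driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
    echo "$bdf is bound to $driver"   # vfio-pci after 'setup.sh config', nvme after 'setup.sh reset'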
00:06:03.859 00:06:03.859 real 0m4.365s 00:06:03.859 user 0m3.237s 00:06:03.859 sys 0m0.188s 00:06:03.859 14:00:11 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.859 14:00:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.859 ************************************ 00:06:03.859 END TEST env_dpdk_post_init 00:06:03.859 ************************************ 00:06:03.859 14:00:11 env -- env/env.sh@26 -- # uname 00:06:03.859 14:00:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:03.859 14:00:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:03.859 14:00:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.859 14:00:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.859 14:00:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.859 ************************************ 00:06:03.859 START TEST env_mem_callbacks 00:06:03.859 ************************************ 00:06:03.859 14:00:11 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:03.859 EAL: Detected CPU lcores: 48 00:06:03.859 EAL: Detected NUMA nodes: 2 00:06:03.859 EAL: Detected shared linkage of DPDK 00:06:03.859 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:03.859 EAL: Selected IOVA mode 'VA' 00:06:03.859 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.859 EAL: VFIO support initialized 00:06:03.859 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:03.859 00:06:03.859 00:06:03.859 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.859 http://cunit.sourceforge.net/ 00:06:03.859 00:06:03.859 00:06:03.859 Suite: memory 00:06:03.859 Test: test ... 
00:06:03.859 register 0x200000200000 2097152 00:06:03.859 malloc 3145728 00:06:03.859 register 0x200000400000 4194304 00:06:03.859 buf 0x200000500000 len 3145728 PASSED 00:06:03.859 malloc 64 00:06:03.859 buf 0x2000004fff40 len 64 PASSED 00:06:03.859 malloc 4194304 00:06:03.859 register 0x200000800000 6291456 00:06:03.859 buf 0x200000a00000 len 4194304 PASSED 00:06:03.860 free 0x200000500000 3145728 00:06:03.860 free 0x2000004fff40 64 00:06:03.860 unregister 0x200000400000 4194304 PASSED 00:06:03.860 free 0x200000a00000 4194304 00:06:03.860 unregister 0x200000800000 6291456 PASSED 00:06:03.860 malloc 8388608 00:06:03.860 register 0x200000400000 10485760 00:06:03.860 buf 0x200000600000 len 8388608 PASSED 00:06:03.860 free 0x200000600000 8388608 00:06:03.860 unregister 0x200000400000 10485760 PASSED 00:06:03.860 passed 00:06:03.860 00:06:03.860 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.860 suites 1 1 n/a 0 0 00:06:03.860 tests 1 1 1 0 0 00:06:03.860 asserts 15 15 15 0 n/a 00:06:03.860 00:06:03.860 Elapsed time = 0.005 seconds 00:06:03.860 00:06:03.860 real 0m0.048s 00:06:03.860 user 0m0.013s 00:06:03.860 sys 0m0.035s 00:06:03.860 14:00:11 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.860 14:00:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:03.860 ************************************ 00:06:03.860 END TEST env_mem_callbacks 00:06:03.860 ************************************ 00:06:03.860 00:06:03.860 real 0m6.293s 00:06:03.860 user 0m4.324s 00:06:03.860 sys 0m1.008s 00:06:03.860 14:00:11 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.860 14:00:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.860 ************************************ 00:06:03.860 END TEST env 00:06:03.860 ************************************ 00:06:03.860 14:00:11 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:03.860 14:00:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.860 14:00:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.860 14:00:11 -- common/autotest_common.sh@10 -- # set +x 00:06:03.860 ************************************ 00:06:03.860 START TEST rpc 00:06:03.860 ************************************ 00:06:03.860 14:00:11 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:03.860 * Looking for test storage... 00:06:03.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:04.117 14:00:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=104726 00:06:04.117 14:00:11 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:04.117 14:00:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.117 14:00:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 104726 00:06:04.117 14:00:11 rpc -- common/autotest_common.sh@831 -- # '[' -z 104726 ']' 00:06:04.117 14:00:11 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.117 14:00:11 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.117 14:00:11 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
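A hedged reconstruction of the run_test helper whose START TEST / END TEST banners and real/user/sys triplets recur throughout this log; it is inferred from the output alone, and the real helper in autotest_common.sh also manages timing records and xtrace state that this sketch omits:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # emits the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }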
00:06:04.117 14:00:11 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.117 14:00:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.117 [2024-07-26 14:00:11.928155] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:06:04.117 [2024-07-26 14:00:11.928254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104726 ] 00:06:04.117 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.117 [2024-07-26 14:00:11.985372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.117 [2024-07-26 14:00:12.090235] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:04.117 [2024-07-26 14:00:12.090300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 104726' to capture a snapshot of events at runtime. 00:06:04.117 [2024-07-26 14:00:12.090329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:04.118 [2024-07-26 14:00:12.090340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:04.118 [2024-07-26 14:00:12.090349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid104726 for offline analysis/debug. 00:06:04.118 [2024-07-26 14:00:12.090382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.376 14:00:12 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.376 14:00:12 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:04.376 14:00:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:04.376 14:00:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:04.376 14:00:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:04.376 14:00:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:04.376 14:00:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.376 14:00:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.376 14:00:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.376 ************************************ 00:06:04.376 START TEST rpc_integrity 00:06:04.376 ************************************ 00:06:04.376 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:04.376 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:04.376 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.376 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.376 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.376 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:04.376 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:04.376 14:00:12 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:04.376 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:04.376 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.376 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.376 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.376 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:04.376 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:04.376 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.377 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.635 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.635 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:04.635 { 00:06:04.635 "name": "Malloc0", 00:06:04.635 "aliases": [ 00:06:04.635 "76bbb0e3-5971-4518-b25c-9c5270e31f59" 00:06:04.635 ], 00:06:04.635 "product_name": "Malloc disk", 00:06:04.635 "block_size": 512, 00:06:04.635 "num_blocks": 16384, 00:06:04.635 "uuid": "76bbb0e3-5971-4518-b25c-9c5270e31f59", 00:06:04.635 "assigned_rate_limits": { 00:06:04.635 "rw_ios_per_sec": 0, 00:06:04.635 "rw_mbytes_per_sec": 0, 00:06:04.635 "r_mbytes_per_sec": 0, 00:06:04.635 "w_mbytes_per_sec": 0 00:06:04.635 }, 00:06:04.635 "claimed": false, 00:06:04.635 "zoned": false, 00:06:04.635 "supported_io_types": { 00:06:04.635 "read": true, 00:06:04.635 "write": true, 00:06:04.635 "unmap": true, 00:06:04.635 "flush": true, 00:06:04.635 "reset": true, 00:06:04.635 "nvme_admin": false, 00:06:04.635 "nvme_io": false, 00:06:04.635 "nvme_io_md": false, 00:06:04.635 "write_zeroes": true, 00:06:04.635 "zcopy": true, 00:06:04.635 "get_zone_info": false, 00:06:04.635 "zone_management": false, 00:06:04.635 "zone_append": false, 00:06:04.635 "compare": false, 00:06:04.635 "compare_and_write": false, 00:06:04.635 "abort": true, 00:06:04.635 "seek_hole": false, 00:06:04.635 "seek_data": false, 00:06:04.635 "copy": true, 00:06:04.635 "nvme_iov_md": false 00:06:04.635 }, 00:06:04.635 "memory_domains": [ 00:06:04.635 { 00:06:04.635 "dma_device_id": "system", 00:06:04.635 "dma_device_type": 1 00:06:04.635 }, 00:06:04.635 { 00:06:04.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.635 "dma_device_type": 2 00:06:04.635 } 00:06:04.635 ], 00:06:04.635 "driver_specific": {} 00:06:04.635 } 00:06:04.635 ]' 00:06:04.635 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:04.635 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:04.635 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:04.635 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.635 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.635 [2024-07-26 14:00:12.440089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:04.635 [2024-07-26 14:00:12.440124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:04.635 [2024-07-26 14:00:12.440160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b43d50 00:06:04.635 [2024-07-26 14:00:12.440173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:04.635 [2024-07-26 14:00:12.441439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
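[annotation] rpc_integrity, traced above and below, is a create/inspect/delete round trip over bdev RPCs. The same sequence can be sketched against a running target with scripts/rpc.py, which drives the same JSON-RPC methods that the suite's rpc_cmd helper wraps (that wrapper equivalence is an assumption; the commands and jq checks are taken verbatim from the trace):

  scripts/rpc.py bdev_malloc_create 8 512      # 8 MiB, 512 B blocks -> Malloc0 (16384 blocks)
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length    # expect 2: Malloc0 + Passthru0
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length    # back to 0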
00:06:04.635 [2024-07-26 14:00:12.441460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:04.635 Passthru0 00:06:04.635 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.635 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:04.635 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.635 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.635 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.635 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:04.635 { 00:06:04.635 "name": "Malloc0", 00:06:04.635 "aliases": [ 00:06:04.635 "76bbb0e3-5971-4518-b25c-9c5270e31f59" 00:06:04.635 ], 00:06:04.635 "product_name": "Malloc disk", 00:06:04.635 "block_size": 512, 00:06:04.635 "num_blocks": 16384, 00:06:04.636 "uuid": "76bbb0e3-5971-4518-b25c-9c5270e31f59", 00:06:04.636 "assigned_rate_limits": { 00:06:04.636 "rw_ios_per_sec": 0, 00:06:04.636 "rw_mbytes_per_sec": 0, 00:06:04.636 "r_mbytes_per_sec": 0, 00:06:04.636 "w_mbytes_per_sec": 0 00:06:04.636 }, 00:06:04.636 "claimed": true, 00:06:04.636 "claim_type": "exclusive_write", 00:06:04.636 "zoned": false, 00:06:04.636 "supported_io_types": { 00:06:04.636 "read": true, 00:06:04.636 "write": true, 00:06:04.636 "unmap": true, 00:06:04.636 "flush": true, 00:06:04.636 "reset": true, 00:06:04.636 "nvme_admin": false, 00:06:04.636 "nvme_io": false, 00:06:04.636 "nvme_io_md": false, 00:06:04.636 "write_zeroes": true, 00:06:04.636 "zcopy": true, 00:06:04.636 "get_zone_info": false, 00:06:04.636 "zone_management": false, 00:06:04.636 "zone_append": false, 00:06:04.636 "compare": false, 00:06:04.636 "compare_and_write": false, 00:06:04.636 "abort": true, 00:06:04.636 "seek_hole": false, 00:06:04.636 "seek_data": false, 00:06:04.636 "copy": true, 00:06:04.636 "nvme_iov_md": false 00:06:04.636 }, 00:06:04.636 "memory_domains": [ 00:06:04.636 { 00:06:04.636 "dma_device_id": "system", 00:06:04.636 "dma_device_type": 1 00:06:04.636 }, 00:06:04.636 { 00:06:04.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.636 "dma_device_type": 2 00:06:04.636 } 00:06:04.636 ], 00:06:04.636 "driver_specific": {} 00:06:04.636 }, 00:06:04.636 { 00:06:04.636 "name": "Passthru0", 00:06:04.636 "aliases": [ 00:06:04.636 "b97eb515-029e-5e63-bead-0d88a26e0587" 00:06:04.636 ], 00:06:04.636 "product_name": "passthru", 00:06:04.636 "block_size": 512, 00:06:04.636 "num_blocks": 16384, 00:06:04.636 "uuid": "b97eb515-029e-5e63-bead-0d88a26e0587", 00:06:04.636 "assigned_rate_limits": { 00:06:04.636 "rw_ios_per_sec": 0, 00:06:04.636 "rw_mbytes_per_sec": 0, 00:06:04.636 "r_mbytes_per_sec": 0, 00:06:04.636 "w_mbytes_per_sec": 0 00:06:04.636 }, 00:06:04.636 "claimed": false, 00:06:04.636 "zoned": false, 00:06:04.636 "supported_io_types": { 00:06:04.636 "read": true, 00:06:04.636 "write": true, 00:06:04.636 "unmap": true, 00:06:04.636 "flush": true, 00:06:04.636 "reset": true, 00:06:04.636 "nvme_admin": false, 00:06:04.636 "nvme_io": false, 00:06:04.636 "nvme_io_md": false, 00:06:04.636 "write_zeroes": true, 00:06:04.636 "zcopy": true, 00:06:04.636 "get_zone_info": false, 00:06:04.636 "zone_management": false, 00:06:04.636 "zone_append": false, 00:06:04.636 "compare": false, 00:06:04.636 "compare_and_write": false, 00:06:04.636 "abort": true, 00:06:04.636 "seek_hole": false, 00:06:04.636 "seek_data": false, 00:06:04.636 "copy": true, 00:06:04.636 "nvme_iov_md": false 00:06:04.636 
}, 00:06:04.636 "memory_domains": [ 00:06:04.636 { 00:06:04.636 "dma_device_id": "system", 00:06:04.636 "dma_device_type": 1 00:06:04.636 }, 00:06:04.636 { 00:06:04.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.636 "dma_device_type": 2 00:06:04.636 } 00:06:04.636 ], 00:06:04.636 "driver_specific": { 00:06:04.636 "passthru": { 00:06:04.636 "name": "Passthru0", 00:06:04.636 "base_bdev_name": "Malloc0" 00:06:04.636 } 00:06:04.636 } 00:06:04.636 } 00:06:04.636 ]' 00:06:04.636 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:04.636 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:04.636 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:04.636 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.636 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.636 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.636 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:04.636 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.636 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.636 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.636 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:04.636 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.636 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.636 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.636 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:04.636 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:04.636 14:00:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:04.636 00:06:04.636 real 0m0.209s 00:06:04.636 user 0m0.134s 00:06:04.636 sys 0m0.019s 00:06:04.636 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.636 14:00:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.636 ************************************ 00:06:04.636 END TEST rpc_integrity 00:06:04.636 ************************************ 00:06:04.636 14:00:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:04.636 14:00:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.636 14:00:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.636 14:00:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.636 ************************************ 00:06:04.636 START TEST rpc_plugins 00:06:04.636 ************************************ 00:06:04.636 14:00:12 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:04.636 14:00:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:04.636 14:00:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.636 14:00:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:04.636 14:00:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.636 14:00:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:04.636 14:00:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:04.636 14:00:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.636 14:00:12 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:06:04.636 14:00:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.636 14:00:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:04.636 { 00:06:04.636 "name": "Malloc1", 00:06:04.636 "aliases": [ 00:06:04.636 "0a5a22ad-9f30-4488-8f13-d7fa8713f04e" 00:06:04.636 ], 00:06:04.636 "product_name": "Malloc disk", 00:06:04.636 "block_size": 4096, 00:06:04.636 "num_blocks": 256, 00:06:04.636 "uuid": "0a5a22ad-9f30-4488-8f13-d7fa8713f04e", 00:06:04.636 "assigned_rate_limits": { 00:06:04.636 "rw_ios_per_sec": 0, 00:06:04.636 "rw_mbytes_per_sec": 0, 00:06:04.636 "r_mbytes_per_sec": 0, 00:06:04.636 "w_mbytes_per_sec": 0 00:06:04.636 }, 00:06:04.636 "claimed": false, 00:06:04.636 "zoned": false, 00:06:04.636 "supported_io_types": { 00:06:04.636 "read": true, 00:06:04.636 "write": true, 00:06:04.636 "unmap": true, 00:06:04.636 "flush": true, 00:06:04.636 "reset": true, 00:06:04.636 "nvme_admin": false, 00:06:04.636 "nvme_io": false, 00:06:04.636 "nvme_io_md": false, 00:06:04.636 "write_zeroes": true, 00:06:04.636 "zcopy": true, 00:06:04.636 "get_zone_info": false, 00:06:04.636 "zone_management": false, 00:06:04.636 "zone_append": false, 00:06:04.636 "compare": false, 00:06:04.636 "compare_and_write": false, 00:06:04.636 "abort": true, 00:06:04.636 "seek_hole": false, 00:06:04.636 "seek_data": false, 00:06:04.636 "copy": true, 00:06:04.636 "nvme_iov_md": false 00:06:04.636 }, 00:06:04.636 "memory_domains": [ 00:06:04.636 { 00:06:04.636 "dma_device_id": "system", 00:06:04.636 "dma_device_type": 1 00:06:04.636 }, 00:06:04.636 { 00:06:04.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.636 "dma_device_type": 2 00:06:04.636 } 00:06:04.636 ], 00:06:04.636 "driver_specific": {} 00:06:04.636 } 00:06:04.636 ]' 00:06:04.636 14:00:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:04.894 14:00:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:04.894 14:00:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:04.894 14:00:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.894 14:00:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:04.894 14:00:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.894 14:00:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:04.894 14:00:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.894 14:00:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:04.894 14:00:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.894 14:00:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:04.894 14:00:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:04.894 14:00:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:04.894 00:06:04.894 real 0m0.106s 00:06:04.894 user 0m0.070s 00:06:04.894 sys 0m0.008s 00:06:04.894 14:00:12 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.895 14:00:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:04.895 ************************************ 00:06:04.895 END TEST rpc_plugins 00:06:04.895 ************************************ 00:06:04.895 14:00:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:04.895 14:00:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.895 14:00:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.895 14:00:12 
rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.895 ************************************ 00:06:04.895 START TEST rpc_trace_cmd_test 00:06:04.895 ************************************ 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:04.895 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid104726", 00:06:04.895 "tpoint_group_mask": "0x8", 00:06:04.895 "iscsi_conn": { 00:06:04.895 "mask": "0x2", 00:06:04.895 "tpoint_mask": "0x0" 00:06:04.895 }, 00:06:04.895 "scsi": { 00:06:04.895 "mask": "0x4", 00:06:04.895 "tpoint_mask": "0x0" 00:06:04.895 }, 00:06:04.895 "bdev": { 00:06:04.895 "mask": "0x8", 00:06:04.895 "tpoint_mask": "0xffffffffffffffff" 00:06:04.895 }, 00:06:04.895 "nvmf_rdma": { 00:06:04.895 "mask": "0x10", 00:06:04.895 "tpoint_mask": "0x0" 00:06:04.895 }, 00:06:04.895 "nvmf_tcp": { 00:06:04.895 "mask": "0x20", 00:06:04.895 "tpoint_mask": "0x0" 00:06:04.895 }, 00:06:04.895 "ftl": { 00:06:04.895 "mask": "0x40", 00:06:04.895 "tpoint_mask": "0x0" 00:06:04.895 }, 00:06:04.895 "blobfs": { 00:06:04.895 "mask": "0x80", 00:06:04.895 "tpoint_mask": "0x0" 00:06:04.895 }, 00:06:04.895 "dsa": { 00:06:04.895 "mask": "0x200", 00:06:04.895 "tpoint_mask": "0x0" 00:06:04.895 }, 00:06:04.895 "thread": { 00:06:04.895 "mask": "0x400", 00:06:04.895 "tpoint_mask": "0x0" 00:06:04.895 }, 00:06:04.895 "nvme_pcie": { 00:06:04.895 "mask": "0x800", 00:06:04.895 "tpoint_mask": "0x0" 00:06:04.895 }, 00:06:04.895 "iaa": { 00:06:04.895 "mask": "0x1000", 00:06:04.895 "tpoint_mask": "0x0" 00:06:04.895 }, 00:06:04.895 "nvme_tcp": { 00:06:04.895 "mask": "0x2000", 00:06:04.895 "tpoint_mask": "0x0" 00:06:04.895 }, 00:06:04.895 "bdev_nvme": { 00:06:04.895 "mask": "0x4000", 00:06:04.895 "tpoint_mask": "0x0" 00:06:04.895 }, 00:06:04.895 "sock": { 00:06:04.895 "mask": "0x8000", 00:06:04.895 "tpoint_mask": "0x0" 00:06:04.895 } 00:06:04.895 }' 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:04.895 14:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:05.154 14:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:05.154 00:06:05.154 real 0m0.181s 00:06:05.154 user 0m0.161s 00:06:05.154 sys 0m0.011s 00:06:05.154 14:00:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.154 14:00:12 rpc.rpc_trace_cmd_test 
-- common/autotest_common.sh@10 -- # set +x 00:06:05.154 ************************************ 00:06:05.154 END TEST rpc_trace_cmd_test 00:06:05.154 ************************************ 00:06:05.154 14:00:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:05.154 14:00:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:05.154 14:00:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:05.154 14:00:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.154 14:00:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.154 14:00:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 ************************************ 00:06:05.154 START TEST rpc_daemon_integrity 00:06:05.154 ************************************ 00:06:05.154 14:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:05.154 14:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:05.154 14:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.154 14:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 14:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.154 14:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:05.154 14:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:05.154 { 00:06:05.154 "name": "Malloc2", 00:06:05.154 "aliases": [ 00:06:05.154 "aae6159a-6e20-4787-a92e-64d59ff60cad" 00:06:05.154 ], 00:06:05.154 "product_name": "Malloc disk", 00:06:05.154 "block_size": 512, 00:06:05.154 "num_blocks": 16384, 00:06:05.154 "uuid": "aae6159a-6e20-4787-a92e-64d59ff60cad", 00:06:05.154 "assigned_rate_limits": { 00:06:05.154 "rw_ios_per_sec": 0, 00:06:05.154 "rw_mbytes_per_sec": 0, 00:06:05.154 "r_mbytes_per_sec": 0, 00:06:05.154 "w_mbytes_per_sec": 0 00:06:05.154 }, 00:06:05.154 "claimed": false, 00:06:05.154 "zoned": false, 00:06:05.154 "supported_io_types": { 00:06:05.154 "read": true, 00:06:05.154 "write": true, 00:06:05.154 "unmap": true, 00:06:05.154 "flush": true, 00:06:05.154 "reset": true, 00:06:05.154 "nvme_admin": false, 00:06:05.154 "nvme_io": false, 00:06:05.154 "nvme_io_md": false, 00:06:05.154 "write_zeroes": true, 00:06:05.154 "zcopy": true, 00:06:05.154 "get_zone_info": false, 00:06:05.154 "zone_management": false, 00:06:05.154 "zone_append": false, 00:06:05.154 "compare": false, 00:06:05.154 "compare_and_write": false, 00:06:05.154 "abort": true, 
00:06:05.154 "seek_hole": false, 00:06:05.154 "seek_data": false, 00:06:05.154 "copy": true, 00:06:05.154 "nvme_iov_md": false 00:06:05.154 }, 00:06:05.154 "memory_domains": [ 00:06:05.154 { 00:06:05.154 "dma_device_id": "system", 00:06:05.154 "dma_device_type": 1 00:06:05.154 }, 00:06:05.154 { 00:06:05.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.154 "dma_device_type": 2 00:06:05.154 } 00:06:05.154 ], 00:06:05.154 "driver_specific": {} 00:06:05.154 } 00:06:05.154 ]' 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 [2024-07-26 14:00:13.065918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:05.154 [2024-07-26 14:00:13.065968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.154 [2024-07-26 14:00:13.065995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b43980 00:06:05.154 [2024-07-26 14:00:13.066014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.154 [2024-07-26 14:00:13.067203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.154 [2024-07-26 14:00:13.067226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:05.154 Passthru0 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:05.154 { 00:06:05.154 "name": "Malloc2", 00:06:05.154 "aliases": [ 00:06:05.154 "aae6159a-6e20-4787-a92e-64d59ff60cad" 00:06:05.154 ], 00:06:05.154 "product_name": "Malloc disk", 00:06:05.154 "block_size": 512, 00:06:05.154 "num_blocks": 16384, 00:06:05.154 "uuid": "aae6159a-6e20-4787-a92e-64d59ff60cad", 00:06:05.154 "assigned_rate_limits": { 00:06:05.154 "rw_ios_per_sec": 0, 00:06:05.154 "rw_mbytes_per_sec": 0, 00:06:05.154 "r_mbytes_per_sec": 0, 00:06:05.154 "w_mbytes_per_sec": 0 00:06:05.154 }, 00:06:05.154 "claimed": true, 00:06:05.154 "claim_type": "exclusive_write", 00:06:05.154 "zoned": false, 00:06:05.154 "supported_io_types": { 00:06:05.154 "read": true, 00:06:05.154 "write": true, 00:06:05.154 "unmap": true, 00:06:05.154 "flush": true, 00:06:05.154 "reset": true, 00:06:05.154 "nvme_admin": false, 00:06:05.154 "nvme_io": false, 00:06:05.154 "nvme_io_md": false, 00:06:05.154 "write_zeroes": true, 00:06:05.154 "zcopy": true, 00:06:05.154 "get_zone_info": false, 00:06:05.154 "zone_management": false, 00:06:05.154 "zone_append": false, 00:06:05.154 "compare": false, 00:06:05.154 "compare_and_write": false, 00:06:05.154 "abort": true, 00:06:05.154 "seek_hole": false, 00:06:05.154 "seek_data": false, 00:06:05.154 "copy": true, 00:06:05.154 "nvme_iov_md": false 
00:06:05.154 }, 00:06:05.154 "memory_domains": [ 00:06:05.154 { 00:06:05.154 "dma_device_id": "system", 00:06:05.154 "dma_device_type": 1 00:06:05.154 }, 00:06:05.154 { 00:06:05.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.154 "dma_device_type": 2 00:06:05.154 } 00:06:05.154 ], 00:06:05.154 "driver_specific": {} 00:06:05.154 }, 00:06:05.154 { 00:06:05.154 "name": "Passthru0", 00:06:05.154 "aliases": [ 00:06:05.154 "1c9c3e14-c752-57b2-b3f8-85cab155c93a" 00:06:05.154 ], 00:06:05.154 "product_name": "passthru", 00:06:05.154 "block_size": 512, 00:06:05.154 "num_blocks": 16384, 00:06:05.154 "uuid": "1c9c3e14-c752-57b2-b3f8-85cab155c93a", 00:06:05.154 "assigned_rate_limits": { 00:06:05.154 "rw_ios_per_sec": 0, 00:06:05.154 "rw_mbytes_per_sec": 0, 00:06:05.154 "r_mbytes_per_sec": 0, 00:06:05.154 "w_mbytes_per_sec": 0 00:06:05.154 }, 00:06:05.154 "claimed": false, 00:06:05.154 "zoned": false, 00:06:05.154 "supported_io_types": { 00:06:05.154 "read": true, 00:06:05.154 "write": true, 00:06:05.154 "unmap": true, 00:06:05.154 "flush": true, 00:06:05.154 "reset": true, 00:06:05.154 "nvme_admin": false, 00:06:05.154 "nvme_io": false, 00:06:05.154 "nvme_io_md": false, 00:06:05.154 "write_zeroes": true, 00:06:05.154 "zcopy": true, 00:06:05.154 "get_zone_info": false, 00:06:05.154 "zone_management": false, 00:06:05.154 "zone_append": false, 00:06:05.154 "compare": false, 00:06:05.154 "compare_and_write": false, 00:06:05.154 "abort": true, 00:06:05.154 "seek_hole": false, 00:06:05.154 "seek_data": false, 00:06:05.154 "copy": true, 00:06:05.154 "nvme_iov_md": false 00:06:05.154 }, 00:06:05.154 "memory_domains": [ 00:06:05.154 { 00:06:05.154 "dma_device_id": "system", 00:06:05.154 "dma_device_type": 1 00:06:05.154 }, 00:06:05.154 { 00:06:05.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.154 "dma_device_type": 2 00:06:05.154 } 00:06:05.154 ], 00:06:05.154 "driver_specific": { 00:06:05.154 "passthru": { 00:06:05.154 "name": "Passthru0", 00:06:05.154 "base_bdev_name": "Malloc2" 00:06:05.154 } 00:06:05.154 } 00:06:05.154 } 00:06:05.154 ]' 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:05.154 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # 
jq length 00:06:05.412 14:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:05.412 00:06:05.412 real 0m0.209s 00:06:05.412 user 0m0.138s 00:06:05.412 sys 0m0.018s 00:06:05.412 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.412 14:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.412 ************************************ 00:06:05.412 END TEST rpc_daemon_integrity 00:06:05.412 ************************************ 00:06:05.412 14:00:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:05.412 14:00:13 rpc -- rpc/rpc.sh@84 -- # killprocess 104726 00:06:05.412 14:00:13 rpc -- common/autotest_common.sh@950 -- # '[' -z 104726 ']' 00:06:05.412 14:00:13 rpc -- common/autotest_common.sh@954 -- # kill -0 104726 00:06:05.412 14:00:13 rpc -- common/autotest_common.sh@955 -- # uname 00:06:05.412 14:00:13 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.412 14:00:13 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104726 00:06:05.412 14:00:13 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.412 14:00:13 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.412 14:00:13 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 104726' 00:06:05.412 killing process with pid 104726 00:06:05.412 14:00:13 rpc -- common/autotest_common.sh@969 -- # kill 104726 00:06:05.412 14:00:13 rpc -- common/autotest_common.sh@974 -- # wait 104726 00:06:05.669 00:06:05.669 real 0m1.814s 00:06:05.669 user 0m2.279s 00:06:05.669 sys 0m0.541s 00:06:05.669 14:00:13 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.669 14:00:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.669 ************************************ 00:06:05.669 END TEST rpc 00:06:05.669 ************************************ 00:06:05.669 14:00:13 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:05.669 14:00:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.669 14:00:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.669 14:00:13 -- common/autotest_common.sh@10 -- # set +x 00:06:05.926 ************************************ 00:06:05.926 START TEST skip_rpc 00:06:05.926 ************************************ 00:06:05.926 14:00:13 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:05.926 * Looking for test storage... 
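[annotation] The skip_rpc case starting here checks the inverse contract: with --no-rpc-server the target must come up on core mask 0x1 yet reject all RPC traffic, so spdk_get_version has to fail. Condensed from the trace that follows (NOT and rpc_cmd are autotest helpers; the fixed sleep stands in for waitforlisten because there is no socket to poll):

  $rootdir/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                          # no RPC socket exists, so just give startup time
  NOT rpc_cmd spdk_get_version     # must return non-zero: no RPC server was started
  killprocess $spdk_pid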
00:06:05.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:05.926 14:00:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:05.926 14:00:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:05.926 14:00:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:05.926 14:00:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.926 14:00:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.926 14:00:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.926 ************************************ 00:06:05.926 START TEST skip_rpc 00:06:05.927 ************************************ 00:06:05.927 14:00:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:05.927 14:00:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=105063 00:06:05.927 14:00:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:05.927 14:00:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.927 14:00:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:05.927 [2024-07-26 14:00:13.820989] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:06:05.927 [2024-07-26 14:00:13.821071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105063 ] 00:06:05.927 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.927 [2024-07-26 14:00:13.879375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.184 [2024-07-26 14:00:13.982369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 105063 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 105063 ']' 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 105063 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105063 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105063' 00:06:11.448 killing process with pid 105063 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 105063 00:06:11.448 14:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 105063 00:06:11.448 00:06:11.448 real 0m5.461s 00:06:11.448 user 0m5.178s 00:06:11.448 sys 0m0.285s 00:06:11.448 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.448 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.448 ************************************ 00:06:11.448 END TEST skip_rpc 00:06:11.448 ************************************ 00:06:11.448 14:00:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:11.448 14:00:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.448 14:00:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.448 14:00:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.448 ************************************ 00:06:11.448 START TEST skip_rpc_with_json 00:06:11.448 ************************************ 00:06:11.448 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:11.448 14:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:11.448 14:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=105735 00:06:11.448 14:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.448 14:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.448 14:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 105735 00:06:11.448 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 105735 ']' 00:06:11.448 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.448 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.448 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
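[annotation] skip_rpc_with_json, starting above, builds a config the target can later be rebooted from: it first confirms no TCP transport exists yet (the -19 "No such device" response below), then creates one and snapshots the live state with save_config. Roughly, with the suite's exact error handling elided as an assumption:

  rpc_cmd nvmf_get_transports --trtype tcp || true   # expected: -19 "No such device"
  rpc_cmd nvmf_create_transport -t tcp               # target logs "*** TCP Transport Init ***"
  rpc_cmd save_config > "$CONFIG_PATH"               # replayed later via --json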
00:06:11.448 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.448 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:11.448 [2024-07-26 14:00:19.329591] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:06:11.448 [2024-07-26 14:00:19.329695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105735 ] 00:06:11.448 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.448 [2024-07-26 14:00:19.386250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.707 [2024-07-26 14:00:19.496994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.965 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.965 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:11.965 14:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:11.965 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.965 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:11.965 [2024-07-26 14:00:19.736680] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:11.965 request: 00:06:11.965 { 00:06:11.965 "trtype": "tcp", 00:06:11.965 "method": "nvmf_get_transports", 00:06:11.965 "req_id": 1 00:06:11.965 } 00:06:11.965 Got JSON-RPC error response 00:06:11.965 response: 00:06:11.965 { 00:06:11.965 "code": -19, 00:06:11.965 "message": "No such device" 00:06:11.965 } 00:06:11.965 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:11.965 14:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:11.965 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.965 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:11.965 [2024-07-26 14:00:19.744804] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.965 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.965 14:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:11.965 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.965 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:11.965 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.965 14:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:11.965 { 00:06:11.965 "subsystems": [ 00:06:11.965 { 00:06:11.965 "subsystem": "vfio_user_target", 00:06:11.965 "config": null 00:06:11.965 }, 00:06:11.965 { 00:06:11.965 "subsystem": "keyring", 00:06:11.965 "config": [] 00:06:11.965 }, 00:06:11.965 { 00:06:11.965 "subsystem": "iobuf", 00:06:11.965 "config": [ 00:06:11.965 { 00:06:11.965 "method": "iobuf_set_options", 00:06:11.965 "params": { 00:06:11.965 "small_pool_count": 8192, 00:06:11.965 "large_pool_count": 1024, 00:06:11.965 "small_bufsize": 8192, 00:06:11.965 "large_bufsize": 
135168 00:06:11.965 } 00:06:11.965 } 00:06:11.965 ] 00:06:11.965 }, 00:06:11.965 { 00:06:11.965 "subsystem": "sock", 00:06:11.965 "config": [ 00:06:11.965 { 00:06:11.965 "method": "sock_set_default_impl", 00:06:11.965 "params": { 00:06:11.965 "impl_name": "posix" 00:06:11.965 } 00:06:11.965 }, 00:06:11.965 { 00:06:11.965 "method": "sock_impl_set_options", 00:06:11.965 "params": { 00:06:11.965 "impl_name": "ssl", 00:06:11.965 "recv_buf_size": 4096, 00:06:11.965 "send_buf_size": 4096, 00:06:11.965 "enable_recv_pipe": true, 00:06:11.965 "enable_quickack": false, 00:06:11.965 "enable_placement_id": 0, 00:06:11.965 "enable_zerocopy_send_server": true, 00:06:11.965 "enable_zerocopy_send_client": false, 00:06:11.965 "zerocopy_threshold": 0, 00:06:11.965 "tls_version": 0, 00:06:11.965 "enable_ktls": false 00:06:11.965 } 00:06:11.965 }, 00:06:11.965 { 00:06:11.965 "method": "sock_impl_set_options", 00:06:11.965 "params": { 00:06:11.965 "impl_name": "posix", 00:06:11.965 "recv_buf_size": 2097152, 00:06:11.965 "send_buf_size": 2097152, 00:06:11.965 "enable_recv_pipe": true, 00:06:11.965 "enable_quickack": false, 00:06:11.965 "enable_placement_id": 0, 00:06:11.965 "enable_zerocopy_send_server": true, 00:06:11.965 "enable_zerocopy_send_client": false, 00:06:11.965 "zerocopy_threshold": 0, 00:06:11.965 "tls_version": 0, 00:06:11.965 "enable_ktls": false 00:06:11.965 } 00:06:11.965 } 00:06:11.965 ] 00:06:11.965 }, 00:06:11.965 { 00:06:11.965 "subsystem": "vmd", 00:06:11.965 "config": [] 00:06:11.965 }, 00:06:11.965 { 00:06:11.965 "subsystem": "accel", 00:06:11.965 "config": [ 00:06:11.965 { 00:06:11.965 "method": "accel_set_options", 00:06:11.965 "params": { 00:06:11.965 "small_cache_size": 128, 00:06:11.965 "large_cache_size": 16, 00:06:11.965 "task_count": 2048, 00:06:11.965 "sequence_count": 2048, 00:06:11.965 "buf_count": 2048 00:06:11.965 } 00:06:11.965 } 00:06:11.965 ] 00:06:11.965 }, 00:06:11.965 { 00:06:11.965 "subsystem": "bdev", 00:06:11.965 "config": [ 00:06:11.965 { 00:06:11.965 "method": "bdev_set_options", 00:06:11.965 "params": { 00:06:11.965 "bdev_io_pool_size": 65535, 00:06:11.965 "bdev_io_cache_size": 256, 00:06:11.965 "bdev_auto_examine": true, 00:06:11.965 "iobuf_small_cache_size": 128, 00:06:11.965 "iobuf_large_cache_size": 16 00:06:11.965 } 00:06:11.965 }, 00:06:11.965 { 00:06:11.965 "method": "bdev_raid_set_options", 00:06:11.965 "params": { 00:06:11.965 "process_window_size_kb": 1024, 00:06:11.965 "process_max_bandwidth_mb_sec": 0 00:06:11.965 } 00:06:11.965 }, 00:06:11.965 { 00:06:11.965 "method": "bdev_iscsi_set_options", 00:06:11.965 "params": { 00:06:11.965 "timeout_sec": 30 00:06:11.965 } 00:06:11.965 }, 00:06:11.965 { 00:06:11.965 "method": "bdev_nvme_set_options", 00:06:11.965 "params": { 00:06:11.965 "action_on_timeout": "none", 00:06:11.965 "timeout_us": 0, 00:06:11.965 "timeout_admin_us": 0, 00:06:11.965 "keep_alive_timeout_ms": 10000, 00:06:11.965 "arbitration_burst": 0, 00:06:11.965 "low_priority_weight": 0, 00:06:11.965 "medium_priority_weight": 0, 00:06:11.965 "high_priority_weight": 0, 00:06:11.965 "nvme_adminq_poll_period_us": 10000, 00:06:11.965 "nvme_ioq_poll_period_us": 0, 00:06:11.965 "io_queue_requests": 0, 00:06:11.965 "delay_cmd_submit": true, 00:06:11.965 "transport_retry_count": 4, 00:06:11.965 "bdev_retry_count": 3, 00:06:11.965 "transport_ack_timeout": 0, 00:06:11.966 "ctrlr_loss_timeout_sec": 0, 00:06:11.966 "reconnect_delay_sec": 0, 00:06:11.966 "fast_io_fail_timeout_sec": 0, 00:06:11.966 "disable_auto_failback": false, 00:06:11.966 "generate_uuids": 
false, 00:06:11.966 "transport_tos": 0, 00:06:11.966 "nvme_error_stat": false, 00:06:11.966 "rdma_srq_size": 0, 00:06:11.966 "io_path_stat": false, 00:06:11.966 "allow_accel_sequence": false, 00:06:11.966 "rdma_max_cq_size": 0, 00:06:11.966 "rdma_cm_event_timeout_ms": 0, 00:06:11.966 "dhchap_digests": [ 00:06:11.966 "sha256", 00:06:11.966 "sha384", 00:06:11.966 "sha512" 00:06:11.966 ], 00:06:11.966 "dhchap_dhgroups": [ 00:06:11.966 "null", 00:06:11.966 "ffdhe2048", 00:06:11.966 "ffdhe3072", 00:06:11.966 "ffdhe4096", 00:06:11.966 "ffdhe6144", 00:06:11.966 "ffdhe8192" 00:06:11.966 ] 00:06:11.966 } 00:06:11.966 }, 00:06:11.966 { 00:06:11.966 "method": "bdev_nvme_set_hotplug", 00:06:11.966 "params": { 00:06:11.966 "period_us": 100000, 00:06:11.966 "enable": false 00:06:11.966 } 00:06:11.966 }, 00:06:11.966 { 00:06:11.966 "method": "bdev_wait_for_examine" 00:06:11.966 } 00:06:11.966 ] 00:06:11.966 }, 00:06:11.966 { 00:06:11.966 "subsystem": "scsi", 00:06:11.966 "config": null 00:06:11.966 }, 00:06:11.966 { 00:06:11.966 "subsystem": "scheduler", 00:06:11.966 "config": [ 00:06:11.966 { 00:06:11.966 "method": "framework_set_scheduler", 00:06:11.966 "params": { 00:06:11.966 "name": "static" 00:06:11.966 } 00:06:11.966 } 00:06:11.966 ] 00:06:11.966 }, 00:06:11.966 { 00:06:11.966 "subsystem": "vhost_scsi", 00:06:11.966 "config": [] 00:06:11.966 }, 00:06:11.966 { 00:06:11.966 "subsystem": "vhost_blk", 00:06:11.966 "config": [] 00:06:11.966 }, 00:06:11.966 { 00:06:11.966 "subsystem": "ublk", 00:06:11.966 "config": [] 00:06:11.966 }, 00:06:11.966 { 00:06:11.966 "subsystem": "nbd", 00:06:11.966 "config": [] 00:06:11.966 }, 00:06:11.966 { 00:06:11.966 "subsystem": "nvmf", 00:06:11.966 "config": [ 00:06:11.966 { 00:06:11.966 "method": "nvmf_set_config", 00:06:11.966 "params": { 00:06:11.966 "discovery_filter": "match_any", 00:06:11.966 "admin_cmd_passthru": { 00:06:11.966 "identify_ctrlr": false 00:06:11.966 } 00:06:11.966 } 00:06:11.966 }, 00:06:11.966 { 00:06:11.966 "method": "nvmf_set_max_subsystems", 00:06:11.966 "params": { 00:06:11.966 "max_subsystems": 1024 00:06:11.966 } 00:06:11.966 }, 00:06:11.966 { 00:06:11.966 "method": "nvmf_set_crdt", 00:06:11.966 "params": { 00:06:11.966 "crdt1": 0, 00:06:11.966 "crdt2": 0, 00:06:11.966 "crdt3": 0 00:06:11.966 } 00:06:11.966 }, 00:06:11.966 { 00:06:11.966 "method": "nvmf_create_transport", 00:06:11.966 "params": { 00:06:11.966 "trtype": "TCP", 00:06:11.966 "max_queue_depth": 128, 00:06:11.966 "max_io_qpairs_per_ctrlr": 127, 00:06:11.966 "in_capsule_data_size": 4096, 00:06:11.966 "max_io_size": 131072, 00:06:11.966 "io_unit_size": 131072, 00:06:11.966 "max_aq_depth": 128, 00:06:11.966 "num_shared_buffers": 511, 00:06:11.966 "buf_cache_size": 4294967295, 00:06:11.966 "dif_insert_or_strip": false, 00:06:11.966 "zcopy": false, 00:06:11.966 "c2h_success": true, 00:06:11.966 "sock_priority": 0, 00:06:11.966 "abort_timeout_sec": 1, 00:06:11.966 "ack_timeout": 0, 00:06:11.966 "data_wr_pool_size": 0 00:06:11.966 } 00:06:11.966 } 00:06:11.966 ] 00:06:11.966 }, 00:06:11.966 { 00:06:11.966 "subsystem": "iscsi", 00:06:11.966 "config": [ 00:06:11.966 { 00:06:11.966 "method": "iscsi_set_options", 00:06:11.966 "params": { 00:06:11.966 "node_base": "iqn.2016-06.io.spdk", 00:06:11.966 "max_sessions": 128, 00:06:11.966 "max_connections_per_session": 2, 00:06:11.966 "max_queue_depth": 64, 00:06:11.966 "default_time2wait": 2, 00:06:11.966 "default_time2retain": 20, 00:06:11.966 "first_burst_length": 8192, 00:06:11.966 "immediate_data": true, 00:06:11.966 "allow_duplicated_isid": 
false, 00:06:11.966 "error_recovery_level": 0, 00:06:11.966 "nop_timeout": 60, 00:06:11.966 "nop_in_interval": 30, 00:06:11.966 "disable_chap": false, 00:06:11.966 "require_chap": false, 00:06:11.966 "mutual_chap": false, 00:06:11.966 "chap_group": 0, 00:06:11.966 "max_large_datain_per_connection": 64, 00:06:11.966 "max_r2t_per_connection": 4, 00:06:11.966 "pdu_pool_size": 36864, 00:06:11.966 "immediate_data_pool_size": 16384, 00:06:11.966 "data_out_pool_size": 2048 00:06:11.966 } 00:06:11.966 } 00:06:11.966 ] 00:06:11.966 } 00:06:11.966 ] 00:06:11.966 } 00:06:11.966 14:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:11.966 14:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 105735 00:06:11.966 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 105735 ']' 00:06:11.966 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 105735 00:06:11.966 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:11.966 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.966 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105735 00:06:11.966 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.966 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.966 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105735' 00:06:11.966 killing process with pid 105735 00:06:11.966 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 105735 00:06:11.966 14:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 105735 00:06:12.540 14:00:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=105875 00:06:12.540 14:00:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:12.540 14:00:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:17.834 14:00:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 105875 00:06:17.834 14:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 105875 ']' 00:06:17.834 14:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 105875 00:06:17.834 14:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:17.834 14:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.834 14:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105875 00:06:17.834 14:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.834 14:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.834 14:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105875' 00:06:17.834 killing process with pid 105875 00:06:17.834 14:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 105875 00:06:17.834 14:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 105875 
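[annotation] The JSON above is then proven round-trippable: a second target (pid 105875 in the trace) boots from it with RPC disabled, and the suite greps for the transport-init banner to show the nvmf subsystem was rebuilt purely from the saved config. A sketch, assuming the target's output is captured to $LOG_PATH as the grep below implies:

  $rootdir/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG_PATH" > "$LOG_PATH" 2>&1 &
  spdk_pid=$!
  sleep 5
  killprocess $spdk_pid
  grep -q 'TCP Transport Init' "$LOG_PATH"   # transport restored from config.json alone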
00:06:17.834 14:00:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:17.834 14:00:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:17.834 00:06:17.834 real 0m6.534s 00:06:17.834 user 0m6.165s 00:06:17.834 sys 0m0.646s 00:06:17.834 14:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.834 14:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:17.834 ************************************ 00:06:17.834 END TEST skip_rpc_with_json 00:06:17.834 ************************************ 00:06:17.834 14:00:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:17.834 14:00:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.834 14:00:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.834 14:00:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.094 ************************************ 00:06:18.094 START TEST skip_rpc_with_delay 00:06:18.094 ************************************ 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:18.094 [2024-07-26 14:00:25.920188] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
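[annotation] skip_rpc_with_delay, whose failure path is traced just above, exercises an argument-validation branch rather than a timing one: --wait-for-rpc is meaningless when the RPC server is disabled, so spdk_tgt must refuse to start at all. In sketch form:

  NOT $rootdir/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # expected error, as in the trace:
  #   Cannot use '--wait-for-rpc' if no RPC server is going to be started.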
00:06:18.094 [2024-07-26 14:00:25.920286] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.094 00:06:18.094 real 0m0.069s 00:06:18.094 user 0m0.045s 00:06:18.094 sys 0m0.024s 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.094 14:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:18.094 ************************************ 00:06:18.094 END TEST skip_rpc_with_delay 00:06:18.094 ************************************ 00:06:18.094 14:00:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:18.094 14:00:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:18.094 14:00:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:18.094 14:00:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.094 14:00:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.094 14:00:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.094 ************************************ 00:06:18.094 START TEST exit_on_failed_rpc_init 00:06:18.094 ************************************ 00:06:18.094 14:00:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:18.094 14:00:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=106593 00:06:18.094 14:00:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.094 14:00:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 106593 00:06:18.094 14:00:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 106593 ']' 00:06:18.094 14:00:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.094 14:00:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.094 14:00:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.094 14:00:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.094 14:00:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:18.094 [2024-07-26 14:00:26.036417] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:06:18.094 [2024-07-26 14:00:26.036506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106593 ] 00:06:18.094 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.094 [2024-07-26 14:00:26.094307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.353 [2024-07-26 14:00:26.195161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.612 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.612 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:18.612 14:00:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.612 14:00:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:18.612 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:18.612 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:18.612 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.612 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.612 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.612 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.612 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.612 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.612 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.612 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:18.612 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:18.612 [2024-07-26 14:00:26.474037] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:06:18.612 [2024-07-26 14:00:26.474123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106717 ] 00:06:18.612 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.612 [2024-07-26 14:00:26.530844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.870 [2024-07-26 14:00:26.642341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.870 [2024-07-26 14:00:26.642465] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:18.871 [2024-07-26 14:00:26.642483] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:18.871 [2024-07-26 14:00:26.642495] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 106593 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 106593 ']' 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 106593 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 106593 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 106593' 00:06:18.871 killing process with pid 106593 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 106593 00:06:18.871 14:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 106593 00:06:19.439 00:06:19.439 real 0m1.241s 00:06:19.439 user 0m1.415s 00:06:19.439 sys 0m0.404s 00:06:19.439 14:00:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.439 14:00:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:19.439 ************************************ 00:06:19.439 END TEST exit_on_failed_rpc_init 00:06:19.439 ************************************ 00:06:19.439 14:00:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 
00:06:19.439 00:06:19.439 real 0m13.562s 00:06:19.439 user 0m12.919s 00:06:19.439 sys 0m1.518s 00:06:19.439 14:00:27 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.439 14:00:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.439 ************************************ 00:06:19.439 END TEST skip_rpc 00:06:19.439 ************************************ 00:06:19.439 14:00:27 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:19.439 14:00:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.439 14:00:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.439 14:00:27 -- common/autotest_common.sh@10 -- # set +x 00:06:19.439 ************************************ 00:06:19.439 START TEST rpc_client 00:06:19.439 ************************************ 00:06:19.439 14:00:27 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:19.439 * Looking for test storage... 00:06:19.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:19.439 14:00:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:19.439 OK 00:06:19.439 14:00:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:19.439 00:06:19.439 real 0m0.072s 00:06:19.439 user 0m0.029s 00:06:19.439 sys 0m0.048s 00:06:19.439 14:00:27 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.439 14:00:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:19.439 ************************************ 00:06:19.439 END TEST rpc_client 00:06:19.439 ************************************ 00:06:19.439 14:00:27 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:19.439 14:00:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.439 14:00:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.439 14:00:27 -- common/autotest_common.sh@10 -- # set +x 00:06:19.439 ************************************ 00:06:19.439 START TEST json_config 00:06:19.439 ************************************ 00:06:19.439 14:00:27 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:19.699 14:00:27 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:06:19.699 14:00:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.699 14:00:27 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.699 14:00:27 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.699 14:00:27 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.699 14:00:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.699 14:00:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.699 14:00:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.699 14:00:27 json_config -- paths/export.sh@5 -- # export PATH 00:06:19.699 14:00:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@47 -- # : 0 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:19.699 14:00:27 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:19.699 14:00:27 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:19.699 14:00:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:19.699 14:00:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:19.699 14:00:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:19.699 14:00:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:19.699 14:00:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:19.699 14:00:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:19.699 14:00:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:19.700 14:00:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:19.700 14:00:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:19.700 14:00:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:19.700 14:00:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:19.700 14:00:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:19.700 14:00:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:19.700 14:00:27 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:19.700 14:00:27 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:19.700 INFO: JSON configuration test init 00:06:19.700 14:00:27 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:19.700 14:00:27 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:19.700 14:00:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:19.700 14:00:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.700 14:00:27 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:19.700 14:00:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:19.700 14:00:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.700 14:00:27 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:19.700 14:00:27 json_config -- json_config/common.sh@9 -- # local app=target 00:06:19.700 14:00:27 json_config -- json_config/common.sh@10 -- # shift 00:06:19.700 14:00:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:19.700 14:00:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:19.700 14:00:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:19.700 14:00:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.700 14:00:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.700 
14:00:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=106959 00:06:19.700 14:00:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:19.700 14:00:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:19.700 Waiting for target to run... 00:06:19.700 14:00:27 json_config -- json_config/common.sh@25 -- # waitforlisten 106959 /var/tmp/spdk_tgt.sock 00:06:19.700 14:00:27 json_config -- common/autotest_common.sh@831 -- # '[' -z 106959 ']' 00:06:19.700 14:00:27 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:19.700 14:00:27 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.700 14:00:27 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:19.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:19.700 14:00:27 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.700 14:00:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.700 [2024-07-26 14:00:27.537088] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:06:19.700 [2024-07-26 14:00:27.537179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106959 ] 00:06:19.700 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.959 [2024-07-26 14:00:27.857504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.959 [2024-07-26 14:00:27.935879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.524 14:00:28 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.524 14:00:28 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:20.524 14:00:28 json_config -- json_config/common.sh@26 -- # echo '' 00:06:20.524 00:06:20.524 14:00:28 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:20.524 14:00:28 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:20.524 14:00:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.524 14:00:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.524 14:00:28 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:20.524 14:00:28 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:20.524 14:00:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:20.524 14:00:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.524 14:00:28 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:20.524 14:00:28 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:20.524 14:00:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:23.812 14:00:31 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:23.812 14:00:31 json_config -- json_config/json_config.sh@43 -- # timing_enter 
tgt_check_notification_types 00:06:23.813 14:00:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.813 14:00:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.813 14:00:31 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:23.813 14:00:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:23.813 14:00:31 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:23.813 14:00:31 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:23.813 14:00:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:23.813 14:00:31 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@51 -- # sort 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:24.072 14:00:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:24.072 14:00:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:24.072 14:00:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:24.072 14:00:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:24.072 14:00:31 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:24.072 14:00:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:24.329 MallocForNvmf0 00:06:24.330 14:00:32 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name 
MallocForNvmf1 00:06:24.330 14:00:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:24.588 MallocForNvmf1 00:06:24.588 14:00:32 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:24.588 14:00:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:24.847 [2024-07-26 14:00:32.648917] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.847 14:00:32 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:24.847 14:00:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.105 14:00:32 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:25.105 14:00:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:25.364 14:00:33 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:25.364 14:00:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:25.622 14:00:33 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:25.622 14:00:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:25.622 [2024-07-26 14:00:33.628130] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:25.881 14:00:33 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:25.881 14:00:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:25.881 14:00:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.881 14:00:33 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:25.881 14:00:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:25.881 14:00:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.881 14:00:33 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:25.881 14:00:33 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:25.881 14:00:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:26.140 MallocBdevForConfigChangeCheck 00:06:26.140 14:00:33 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:26.140 14:00:33 json_config -- 
common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.140 14:00:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.140 14:00:33 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:26.140 14:00:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.398 14:00:34 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:26.398 INFO: shutting down applications... 00:06:26.398 14:00:34 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:26.398 14:00:34 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:26.398 14:00:34 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:26.398 14:00:34 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:28.297 Calling clear_iscsi_subsystem 00:06:28.297 Calling clear_nvmf_subsystem 00:06:28.297 Calling clear_nbd_subsystem 00:06:28.297 Calling clear_ublk_subsystem 00:06:28.297 Calling clear_vhost_blk_subsystem 00:06:28.297 Calling clear_vhost_scsi_subsystem 00:06:28.297 Calling clear_bdev_subsystem 00:06:28.297 14:00:35 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:28.297 14:00:35 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:28.297 14:00:35 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:28.297 14:00:35 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:28.297 14:00:35 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:28.297 14:00:35 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:28.555 14:00:36 json_config -- json_config/json_config.sh@349 -- # break 00:06:28.555 14:00:36 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:28.555 14:00:36 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:28.555 14:00:36 json_config -- json_config/common.sh@31 -- # local app=target 00:06:28.555 14:00:36 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:28.555 14:00:36 json_config -- json_config/common.sh@35 -- # [[ -n 106959 ]] 00:06:28.555 14:00:36 json_config -- json_config/common.sh@38 -- # kill -SIGINT 106959 00:06:28.555 14:00:36 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:28.555 14:00:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.555 14:00:36 json_config -- json_config/common.sh@41 -- # kill -0 106959 00:06:28.555 14:00:36 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.124 14:00:36 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.124 14:00:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.124 14:00:36 json_config -- json_config/common.sh@41 -- # kill -0 106959 00:06:29.124 14:00:36 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:29.124 14:00:36 json_config -- json_config/common.sh@43 -- # break 
00:06:29.124 14:00:36 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:29.124 14:00:36 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:29.124 SPDK target shutdown done 00:06:29.124 14:00:36 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:29.124 INFO: relaunching applications... 00:06:29.124 14:00:36 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.124 14:00:36 json_config -- json_config/common.sh@9 -- # local app=target 00:06:29.124 14:00:36 json_config -- json_config/common.sh@10 -- # shift 00:06:29.124 14:00:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:29.124 14:00:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:29.125 14:00:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:29.125 14:00:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.125 14:00:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.125 14:00:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=108160 00:06:29.125 14:00:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.125 14:00:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:29.125 Waiting for target to run... 00:06:29.125 14:00:36 json_config -- json_config/common.sh@25 -- # waitforlisten 108160 /var/tmp/spdk_tgt.sock 00:06:29.125 14:00:36 json_config -- common/autotest_common.sh@831 -- # '[' -z 108160 ']' 00:06:29.125 14:00:36 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:29.125 14:00:36 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.125 14:00:36 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:29.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:29.125 14:00:36 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.125 14:00:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.125 [2024-07-26 14:00:36.890743] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:06:29.125 [2024-07-26 14:00:36.890852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108160 ] 00:06:29.125 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.693 [2024-07-26 14:00:37.450147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.693 [2024-07-26 14:00:37.539537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.981 [2024-07-26 14:00:40.570025] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.981 [2024-07-26 14:00:40.602433] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:33.548 14:00:41 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.548 14:00:41 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:33.548 14:00:41 json_config -- json_config/common.sh@26 -- # echo '' 00:06:33.548 00:06:33.548 14:00:41 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:33.548 14:00:41 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:33.548 INFO: Checking if target configuration is the same... 00:06:33.548 14:00:41 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.548 14:00:41 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:33.548 14:00:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:33.548 + '[' 2 -ne 2 ']' 00:06:33.548 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:33.548 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:33.548 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:33.548 +++ basename /dev/fd/62 00:06:33.548 ++ mktemp /tmp/62.XXX 00:06:33.548 + tmp_file_1=/tmp/62.aK4 00:06:33.548 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.548 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:33.548 + tmp_file_2=/tmp/spdk_tgt_config.json.Xmd 00:06:33.548 + ret=0 00:06:33.548 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:33.806 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:33.806 + diff -u /tmp/62.aK4 /tmp/spdk_tgt_config.json.Xmd 00:06:33.806 + echo 'INFO: JSON config files are the same' 00:06:33.806 INFO: JSON config files are the same 00:06:33.806 + rm /tmp/62.aK4 /tmp/spdk_tgt_config.json.Xmd 00:06:33.806 + exit 0 00:06:33.806 14:00:41 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:33.806 14:00:41 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:33.806 INFO: changing configuration and checking if this can be detected... 
00:06:33.806 14:00:41 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:33.806 14:00:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:34.065 14:00:41 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.065 14:00:41 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:34.065 14:00:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.065 + '[' 2 -ne 2 ']' 00:06:34.065 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:34.065 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:34.066 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:34.066 +++ basename /dev/fd/62 00:06:34.066 ++ mktemp /tmp/62.XXX 00:06:34.066 + tmp_file_1=/tmp/62.qy7 00:06:34.066 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.066 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:34.066 + tmp_file_2=/tmp/spdk_tgt_config.json.x3v 00:06:34.066 + ret=0 00:06:34.066 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.325 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.584 + diff -u /tmp/62.qy7 /tmp/spdk_tgt_config.json.x3v 00:06:34.584 + ret=1 00:06:34.584 + echo '=== Start of file: /tmp/62.qy7 ===' 00:06:34.584 + cat /tmp/62.qy7 00:06:34.584 + echo '=== End of file: /tmp/62.qy7 ===' 00:06:34.584 + echo '' 00:06:34.584 + echo '=== Start of file: /tmp/spdk_tgt_config.json.x3v ===' 00:06:34.584 + cat /tmp/spdk_tgt_config.json.x3v 00:06:34.584 + echo '=== End of file: /tmp/spdk_tgt_config.json.x3v ===' 00:06:34.584 + echo '' 00:06:34.584 + rm /tmp/62.qy7 /tmp/spdk_tgt_config.json.x3v 00:06:34.584 + exit 1 00:06:34.584 14:00:42 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:34.584 INFO: configuration change detected. 
00:06:34.584 14:00:42 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:34.584 14:00:42 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.584 14:00:42 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:34.584 14:00:42 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:34.584 14:00:42 json_config -- json_config/json_config.sh@321 -- # [[ -n 108160 ]] 00:06:34.584 14:00:42 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:34.584 14:00:42 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.584 14:00:42 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:34.584 14:00:42 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:34.584 14:00:42 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:34.584 14:00:42 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:34.584 14:00:42 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:34.584 14:00:42 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.584 14:00:42 json_config -- json_config/json_config.sh@327 -- # killprocess 108160 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@950 -- # '[' -z 108160 ']' 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@954 -- # kill -0 108160 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@955 -- # uname 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108160 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 108160' 00:06:34.584 killing process with pid 108160 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@969 -- # kill 108160 00:06:34.584 14:00:42 json_config -- common/autotest_common.sh@974 -- # wait 108160 00:06:36.486 14:00:44 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:36.486 14:00:44 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:36.486 14:00:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:36.486 14:00:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.486 14:00:44 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:36.486 14:00:44 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:36.486 INFO: Success 00:06:36.486 00:06:36.486 real 0m16.679s 00:06:36.486 user 
0m18.600s 00:06:36.486 sys 0m2.045s 00:06:36.486 14:00:44 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.486 14:00:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.486 ************************************ 00:06:36.486 END TEST json_config 00:06:36.486 ************************************ 00:06:36.486 14:00:44 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:36.486 14:00:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.486 14:00:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.486 14:00:44 -- common/autotest_common.sh@10 -- # set +x 00:06:36.486 ************************************ 00:06:36.486 START TEST json_config_extra_key 00:06:36.486 ************************************ 00:06:36.487 14:00:44 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:36.487 14:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.487 14:00:44 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.487 14:00:44 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.487 14:00:44 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.487 14:00:44 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.487 14:00:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.487 14:00:44 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.487 14:00:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:36.487 14:00:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:36.487 14:00:44 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.487 14:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:36.487 14:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:36.487 14:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:36.487 14:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:36.487 14:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:36.487 14:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:36.487 14:00:44 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:36.487 14:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:36.487 14:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:36.487 14:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:36.487 14:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:36.487 INFO: launching applications... 00:06:36.487 14:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:36.487 14:00:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:36.487 14:00:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:36.487 14:00:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:36.487 14:00:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:36.487 14:00:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:36.487 14:00:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:36.487 14:00:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:36.487 14:00:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=109200 00:06:36.487 14:00:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:36.487 Waiting for target to run... 00:06:36.487 14:00:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 109200 /var/tmp/spdk_tgt.sock 00:06:36.487 14:00:44 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:36.487 14:00:44 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 109200 ']' 00:06:36.487 14:00:44 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:36.487 14:00:44 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.487 14:00:44 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:36.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:36.487 14:00:44 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.487 14:00:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:36.487 [2024-07-26 14:00:44.248868] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:06:36.487 [2024-07-26 14:00:44.248949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109200 ] 00:06:36.487 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.054 [2024-07-26 14:00:44.764010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.054 [2024-07-26 14:00:44.857270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.313 14:00:45 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.313 14:00:45 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:37.313 14:00:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:37.313 00:06:37.313 14:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:37.313 INFO: shutting down applications... 00:06:37.313 14:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:37.313 14:00:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:37.313 14:00:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:37.313 14:00:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 109200 ]] 00:06:37.313 14:00:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 109200 00:06:37.313 14:00:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:37.313 14:00:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.313 14:00:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 109200 00:06:37.313 14:00:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:37.880 14:00:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:37.880 14:00:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.880 14:00:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 109200 00:06:37.880 14:00:45 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:37.880 14:00:45 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:37.880 14:00:45 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:37.880 14:00:45 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:37.880 SPDK target shutdown done 00:06:37.880 14:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:37.880 Success 00:06:37.880 00:06:37.880 real 0m1.538s 00:06:37.880 user 0m1.338s 00:06:37.880 sys 0m0.608s 00:06:37.880 14:00:45 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.880 14:00:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:37.880 ************************************ 00:06:37.880 END TEST json_config_extra_key 00:06:37.880 ************************************ 00:06:37.880 14:00:45 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:37.880 14:00:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.880 14:00:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.880 14:00:45 -- common/autotest_common.sh@10 -- # set +x 00:06:37.880 
************************************ 00:06:37.880 START TEST alias_rpc 00:06:37.880 ************************************ 00:06:37.880 14:00:45 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:37.880 * Looking for test storage... 00:06:37.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:37.880 14:00:45 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:37.880 14:00:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=109383 00:06:37.881 14:00:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.881 14:00:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 109383 00:06:37.881 14:00:45 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 109383 ']' 00:06:37.881 14:00:45 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.881 14:00:45 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.881 14:00:45 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.881 14:00:45 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.881 14:00:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.881 [2024-07-26 14:00:45.838302] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:06:37.881 [2024-07-26 14:00:45.838392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109383 ] 00:06:37.881 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.881 [2024-07-26 14:00:45.897180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.139 [2024-07-26 14:00:46.002020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.397 14:00:46 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.397 14:00:46 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:38.397 14:00:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:38.656 14:00:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 109383 00:06:38.656 14:00:46 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 109383 ']' 00:06:38.656 14:00:46 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 109383 00:06:38.656 14:00:46 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:38.656 14:00:46 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.656 14:00:46 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109383 00:06:38.656 14:00:46 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.656 14:00:46 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.656 14:00:46 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109383' 00:06:38.656 killing process with pid 109383 00:06:38.656 14:00:46 alias_rpc -- common/autotest_common.sh@969 -- # kill 109383 00:06:38.656 14:00:46 alias_rpc -- 
common/autotest_common.sh@974 -- # wait 109383 00:06:39.223 00:06:39.223 real 0m1.249s 00:06:39.223 user 0m1.347s 00:06:39.223 sys 0m0.404s 00:06:39.223 14:00:46 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.223 14:00:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.223 ************************************ 00:06:39.223 END TEST alias_rpc 00:06:39.223 ************************************ 00:06:39.223 14:00:47 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:39.223 14:00:47 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:39.223 14:00:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.223 14:00:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.223 14:00:47 -- common/autotest_common.sh@10 -- # set +x 00:06:39.223 ************************************ 00:06:39.223 START TEST spdkcli_tcp 00:06:39.223 ************************************ 00:06:39.223 14:00:47 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:39.223 * Looking for test storage... 00:06:39.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:39.223 14:00:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:39.223 14:00:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:39.223 14:00:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:39.223 14:00:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:39.223 14:00:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:39.223 14:00:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:39.223 14:00:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:39.223 14:00:47 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:39.223 14:00:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.223 14:00:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=109574 00:06:39.223 14:00:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:39.223 14:00:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 109574 00:06:39.223 14:00:47 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 109574 ']' 00:06:39.223 14:00:47 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.223 14:00:47 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.223 14:00:47 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.223 14:00:47 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.223 14:00:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.223 [2024-07-26 14:00:47.144197] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
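While the spdkcli_tcp target boots, note the teardown idiom that just ran for alias_rpc: killprocess verifies the PID is alive, reads its command name with ps, refuses to signal anything named sudo, then kills and reaps it. A standalone sketch of that sequence, using the PID from this run (wait only reaps processes the current shell spawned, which the target is inside the harness):

    pid=109383
    kill -0 "$pid"                               # fail fast if already gone
    name=$(ps --no-headers -o comm= "$pid")      # reactor_0 for an SPDK target
    if [ "$name" != sudo ]; then                 # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    fi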
00:06:39.223 [2024-07-26 14:00:47.144287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109574 ] 00:06:39.223 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.223 [2024-07-26 14:00:47.200617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.482 [2024-07-26 14:00:47.308977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.482 [2024-07-26 14:00:47.308981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.740 14:00:47 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.740 14:00:47 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:39.740 14:00:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=109703 00:06:39.740 14:00:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:39.740 14:00:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:39.999 [ 00:06:39.999 "bdev_malloc_delete", 00:06:39.999 "bdev_malloc_create", 00:06:39.999 "bdev_null_resize", 00:06:39.999 "bdev_null_delete", 00:06:39.999 "bdev_null_create", 00:06:39.999 "bdev_nvme_cuse_unregister", 00:06:39.999 "bdev_nvme_cuse_register", 00:06:39.999 "bdev_opal_new_user", 00:06:39.999 "bdev_opal_set_lock_state", 00:06:39.999 "bdev_opal_delete", 00:06:39.999 "bdev_opal_get_info", 00:06:39.999 "bdev_opal_create", 00:06:39.999 "bdev_nvme_opal_revert", 00:06:39.999 "bdev_nvme_opal_init", 00:06:39.999 "bdev_nvme_send_cmd", 00:06:39.999 "bdev_nvme_get_path_iostat", 00:06:39.999 "bdev_nvme_get_mdns_discovery_info", 00:06:39.999 "bdev_nvme_stop_mdns_discovery", 00:06:39.999 "bdev_nvme_start_mdns_discovery", 00:06:39.999 "bdev_nvme_set_multipath_policy", 00:06:39.999 "bdev_nvme_set_preferred_path", 00:06:39.999 "bdev_nvme_get_io_paths", 00:06:39.999 "bdev_nvme_remove_error_injection", 00:06:39.999 "bdev_nvme_add_error_injection", 00:06:39.999 "bdev_nvme_get_discovery_info", 00:06:39.999 "bdev_nvme_stop_discovery", 00:06:39.999 "bdev_nvme_start_discovery", 00:06:39.999 "bdev_nvme_get_controller_health_info", 00:06:39.999 "bdev_nvme_disable_controller", 00:06:39.999 "bdev_nvme_enable_controller", 00:06:39.999 "bdev_nvme_reset_controller", 00:06:39.999 "bdev_nvme_get_transport_statistics", 00:06:39.999 "bdev_nvme_apply_firmware", 00:06:39.999 "bdev_nvme_detach_controller", 00:06:39.999 "bdev_nvme_get_controllers", 00:06:39.999 "bdev_nvme_attach_controller", 00:06:39.999 "bdev_nvme_set_hotplug", 00:06:39.999 "bdev_nvme_set_options", 00:06:39.999 "bdev_passthru_delete", 00:06:39.999 "bdev_passthru_create", 00:06:39.999 "bdev_lvol_set_parent_bdev", 00:06:39.999 "bdev_lvol_set_parent", 00:06:39.999 "bdev_lvol_check_shallow_copy", 00:06:39.999 "bdev_lvol_start_shallow_copy", 00:06:39.999 "bdev_lvol_grow_lvstore", 00:06:39.999 "bdev_lvol_get_lvols", 00:06:39.999 "bdev_lvol_get_lvstores", 00:06:39.999 "bdev_lvol_delete", 00:06:39.999 "bdev_lvol_set_read_only", 00:06:39.999 "bdev_lvol_resize", 00:06:39.999 "bdev_lvol_decouple_parent", 00:06:39.999 "bdev_lvol_inflate", 00:06:39.999 "bdev_lvol_rename", 00:06:39.999 "bdev_lvol_clone_bdev", 00:06:39.999 "bdev_lvol_clone", 00:06:39.999 "bdev_lvol_snapshot", 00:06:39.999 "bdev_lvol_create", 00:06:39.999 "bdev_lvol_delete_lvstore", 00:06:39.999 
"bdev_lvol_rename_lvstore", 00:06:39.999 "bdev_lvol_create_lvstore", 00:06:39.999 "bdev_raid_set_options", 00:06:39.999 "bdev_raid_remove_base_bdev", 00:06:39.999 "bdev_raid_add_base_bdev", 00:06:39.999 "bdev_raid_delete", 00:06:39.999 "bdev_raid_create", 00:06:39.999 "bdev_raid_get_bdevs", 00:06:39.999 "bdev_error_inject_error", 00:06:39.999 "bdev_error_delete", 00:06:39.999 "bdev_error_create", 00:06:39.999 "bdev_split_delete", 00:06:39.999 "bdev_split_create", 00:06:39.999 "bdev_delay_delete", 00:06:39.999 "bdev_delay_create", 00:06:39.999 "bdev_delay_update_latency", 00:06:39.999 "bdev_zone_block_delete", 00:06:39.999 "bdev_zone_block_create", 00:06:39.999 "blobfs_create", 00:06:39.999 "blobfs_detect", 00:06:39.999 "blobfs_set_cache_size", 00:06:39.999 "bdev_aio_delete", 00:06:39.999 "bdev_aio_rescan", 00:06:39.999 "bdev_aio_create", 00:06:40.000 "bdev_ftl_set_property", 00:06:40.000 "bdev_ftl_get_properties", 00:06:40.000 "bdev_ftl_get_stats", 00:06:40.000 "bdev_ftl_unmap", 00:06:40.000 "bdev_ftl_unload", 00:06:40.000 "bdev_ftl_delete", 00:06:40.000 "bdev_ftl_load", 00:06:40.000 "bdev_ftl_create", 00:06:40.000 "bdev_virtio_attach_controller", 00:06:40.000 "bdev_virtio_scsi_get_devices", 00:06:40.000 "bdev_virtio_detach_controller", 00:06:40.000 "bdev_virtio_blk_set_hotplug", 00:06:40.000 "bdev_iscsi_delete", 00:06:40.000 "bdev_iscsi_create", 00:06:40.000 "bdev_iscsi_set_options", 00:06:40.000 "accel_error_inject_error", 00:06:40.000 "ioat_scan_accel_module", 00:06:40.000 "dsa_scan_accel_module", 00:06:40.000 "iaa_scan_accel_module", 00:06:40.000 "vfu_virtio_create_scsi_endpoint", 00:06:40.000 "vfu_virtio_scsi_remove_target", 00:06:40.000 "vfu_virtio_scsi_add_target", 00:06:40.000 "vfu_virtio_create_blk_endpoint", 00:06:40.000 "vfu_virtio_delete_endpoint", 00:06:40.000 "keyring_file_remove_key", 00:06:40.000 "keyring_file_add_key", 00:06:40.000 "keyring_linux_set_options", 00:06:40.000 "iscsi_get_histogram", 00:06:40.000 "iscsi_enable_histogram", 00:06:40.000 "iscsi_set_options", 00:06:40.000 "iscsi_get_auth_groups", 00:06:40.000 "iscsi_auth_group_remove_secret", 00:06:40.000 "iscsi_auth_group_add_secret", 00:06:40.000 "iscsi_delete_auth_group", 00:06:40.000 "iscsi_create_auth_group", 00:06:40.000 "iscsi_set_discovery_auth", 00:06:40.000 "iscsi_get_options", 00:06:40.000 "iscsi_target_node_request_logout", 00:06:40.000 "iscsi_target_node_set_redirect", 00:06:40.000 "iscsi_target_node_set_auth", 00:06:40.000 "iscsi_target_node_add_lun", 00:06:40.000 "iscsi_get_stats", 00:06:40.000 "iscsi_get_connections", 00:06:40.000 "iscsi_portal_group_set_auth", 00:06:40.000 "iscsi_start_portal_group", 00:06:40.000 "iscsi_delete_portal_group", 00:06:40.000 "iscsi_create_portal_group", 00:06:40.000 "iscsi_get_portal_groups", 00:06:40.000 "iscsi_delete_target_node", 00:06:40.000 "iscsi_target_node_remove_pg_ig_maps", 00:06:40.000 "iscsi_target_node_add_pg_ig_maps", 00:06:40.000 "iscsi_create_target_node", 00:06:40.000 "iscsi_get_target_nodes", 00:06:40.000 "iscsi_delete_initiator_group", 00:06:40.000 "iscsi_initiator_group_remove_initiators", 00:06:40.000 "iscsi_initiator_group_add_initiators", 00:06:40.000 "iscsi_create_initiator_group", 00:06:40.000 "iscsi_get_initiator_groups", 00:06:40.000 "nvmf_set_crdt", 00:06:40.000 "nvmf_set_config", 00:06:40.000 "nvmf_set_max_subsystems", 00:06:40.000 "nvmf_stop_mdns_prr", 00:06:40.000 "nvmf_publish_mdns_prr", 00:06:40.000 "nvmf_subsystem_get_listeners", 00:06:40.000 "nvmf_subsystem_get_qpairs", 00:06:40.000 "nvmf_subsystem_get_controllers", 00:06:40.000 
"nvmf_get_stats", 00:06:40.000 "nvmf_get_transports", 00:06:40.000 "nvmf_create_transport", 00:06:40.000 "nvmf_get_targets", 00:06:40.000 "nvmf_delete_target", 00:06:40.000 "nvmf_create_target", 00:06:40.000 "nvmf_subsystem_allow_any_host", 00:06:40.000 "nvmf_subsystem_remove_host", 00:06:40.000 "nvmf_subsystem_add_host", 00:06:40.000 "nvmf_ns_remove_host", 00:06:40.000 "nvmf_ns_add_host", 00:06:40.000 "nvmf_subsystem_remove_ns", 00:06:40.000 "nvmf_subsystem_add_ns", 00:06:40.000 "nvmf_subsystem_listener_set_ana_state", 00:06:40.000 "nvmf_discovery_get_referrals", 00:06:40.000 "nvmf_discovery_remove_referral", 00:06:40.000 "nvmf_discovery_add_referral", 00:06:40.000 "nvmf_subsystem_remove_listener", 00:06:40.000 "nvmf_subsystem_add_listener", 00:06:40.000 "nvmf_delete_subsystem", 00:06:40.000 "nvmf_create_subsystem", 00:06:40.000 "nvmf_get_subsystems", 00:06:40.000 "env_dpdk_get_mem_stats", 00:06:40.000 "nbd_get_disks", 00:06:40.000 "nbd_stop_disk", 00:06:40.000 "nbd_start_disk", 00:06:40.000 "ublk_recover_disk", 00:06:40.000 "ublk_get_disks", 00:06:40.000 "ublk_stop_disk", 00:06:40.000 "ublk_start_disk", 00:06:40.000 "ublk_destroy_target", 00:06:40.000 "ublk_create_target", 00:06:40.000 "virtio_blk_create_transport", 00:06:40.000 "virtio_blk_get_transports", 00:06:40.000 "vhost_controller_set_coalescing", 00:06:40.000 "vhost_get_controllers", 00:06:40.000 "vhost_delete_controller", 00:06:40.000 "vhost_create_blk_controller", 00:06:40.000 "vhost_scsi_controller_remove_target", 00:06:40.000 "vhost_scsi_controller_add_target", 00:06:40.000 "vhost_start_scsi_controller", 00:06:40.000 "vhost_create_scsi_controller", 00:06:40.000 "thread_set_cpumask", 00:06:40.000 "framework_get_governor", 00:06:40.000 "framework_get_scheduler", 00:06:40.000 "framework_set_scheduler", 00:06:40.000 "framework_get_reactors", 00:06:40.000 "thread_get_io_channels", 00:06:40.000 "thread_get_pollers", 00:06:40.000 "thread_get_stats", 00:06:40.000 "framework_monitor_context_switch", 00:06:40.000 "spdk_kill_instance", 00:06:40.000 "log_enable_timestamps", 00:06:40.000 "log_get_flags", 00:06:40.000 "log_clear_flag", 00:06:40.000 "log_set_flag", 00:06:40.000 "log_get_level", 00:06:40.000 "log_set_level", 00:06:40.000 "log_get_print_level", 00:06:40.000 "log_set_print_level", 00:06:40.000 "framework_enable_cpumask_locks", 00:06:40.000 "framework_disable_cpumask_locks", 00:06:40.000 "framework_wait_init", 00:06:40.000 "framework_start_init", 00:06:40.000 "scsi_get_devices", 00:06:40.000 "bdev_get_histogram", 00:06:40.000 "bdev_enable_histogram", 00:06:40.000 "bdev_set_qos_limit", 00:06:40.000 "bdev_set_qd_sampling_period", 00:06:40.000 "bdev_get_bdevs", 00:06:40.000 "bdev_reset_iostat", 00:06:40.000 "bdev_get_iostat", 00:06:40.000 "bdev_examine", 00:06:40.000 "bdev_wait_for_examine", 00:06:40.000 "bdev_set_options", 00:06:40.000 "notify_get_notifications", 00:06:40.000 "notify_get_types", 00:06:40.000 "accel_get_stats", 00:06:40.000 "accel_set_options", 00:06:40.000 "accel_set_driver", 00:06:40.000 "accel_crypto_key_destroy", 00:06:40.000 "accel_crypto_keys_get", 00:06:40.000 "accel_crypto_key_create", 00:06:40.000 "accel_assign_opc", 00:06:40.000 "accel_get_module_info", 00:06:40.000 "accel_get_opc_assignments", 00:06:40.000 "vmd_rescan", 00:06:40.000 "vmd_remove_device", 00:06:40.000 "vmd_enable", 00:06:40.000 "sock_get_default_impl", 00:06:40.000 "sock_set_default_impl", 00:06:40.000 "sock_impl_set_options", 00:06:40.000 "sock_impl_get_options", 00:06:40.000 "iobuf_get_stats", 00:06:40.000 "iobuf_set_options", 
00:06:40.000 "keyring_get_keys", 00:06:40.000 "framework_get_pci_devices", 00:06:40.000 "framework_get_config", 00:06:40.000 "framework_get_subsystems", 00:06:40.000 "vfu_tgt_set_base_path", 00:06:40.000 "trace_get_info", 00:06:40.000 "trace_get_tpoint_group_mask", 00:06:40.000 "trace_disable_tpoint_group", 00:06:40.000 "trace_enable_tpoint_group", 00:06:40.000 "trace_clear_tpoint_mask", 00:06:40.000 "trace_set_tpoint_mask", 00:06:40.000 "spdk_get_version", 00:06:40.000 "rpc_get_methods" 00:06:40.000 ] 00:06:40.000 14:00:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:40.000 14:00:47 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:40.000 14:00:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.000 14:00:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:40.000 14:00:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 109574 00:06:40.000 14:00:47 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 109574 ']' 00:06:40.000 14:00:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 109574 00:06:40.000 14:00:47 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:40.000 14:00:47 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.000 14:00:47 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109574 00:06:40.000 14:00:47 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.000 14:00:47 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.000 14:00:47 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109574' 00:06:40.000 killing process with pid 109574 00:06:40.000 14:00:47 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 109574 00:06:40.000 14:00:47 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 109574 00:06:40.259 00:06:40.259 real 0m1.236s 00:06:40.259 user 0m2.182s 00:06:40.259 sys 0m0.429s 00:06:40.259 14:00:48 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.259 14:00:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.259 ************************************ 00:06:40.259 END TEST spdkcli_tcp 00:06:40.259 ************************************ 00:06:40.517 14:00:48 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.518 14:00:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.518 14:00:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.518 14:00:48 -- common/autotest_common.sh@10 -- # set +x 00:06:40.518 ************************************ 00:06:40.518 START TEST dpdk_mem_utility 00:06:40.518 ************************************ 00:06:40.518 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.518 * Looking for test storage... 
00:06:40.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:40.518 14:00:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:40.518 14:00:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=109774 00:06:40.518 14:00:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.518 14:00:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 109774 00:06:40.518 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 109774 ']' 00:06:40.518 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.518 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.518 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.518 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.518 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.518 [2024-07-26 14:00:48.425153] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:06:40.518 [2024-07-26 14:00:48.425255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109774 ] 00:06:40.518 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.518 [2024-07-26 14:00:48.483158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.775 [2024-07-26 14:00:48.592138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.035 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.035 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:41.035 14:00:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:41.035 14:00:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:41.035 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.036 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.036 { 00:06:41.036 "filename": "/tmp/spdk_mem_dump.txt" 00:06:41.036 } 00:06:41.036 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.036 14:00:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:41.036 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:41.036 1 heaps totaling size 814.000000 MiB 00:06:41.036 size: 814.000000 MiB heap id: 0 00:06:41.036 end heaps---------- 00:06:41.036 8 mempools totaling size 598.116089 MiB 00:06:41.036 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:41.036 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:41.036 size: 84.521057 MiB name: bdev_io_109774 00:06:41.036 size: 51.011292 MiB name: evtpool_109774 00:06:41.036 size: 
50.003479 MiB name: msgpool_109774 00:06:41.036 size: 21.763794 MiB name: PDU_Pool 00:06:41.036 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:41.036 size: 0.026123 MiB name: Session_Pool 00:06:41.036 end mempools------- 00:06:41.036 6 memzones totaling size 4.142822 MiB 00:06:41.036 size: 1.000366 MiB name: RG_ring_0_109774 00:06:41.036 size: 1.000366 MiB name: RG_ring_1_109774 00:06:41.036 size: 1.000366 MiB name: RG_ring_4_109774 00:06:41.036 size: 1.000366 MiB name: RG_ring_5_109774 00:06:41.036 size: 0.125366 MiB name: RG_ring_2_109774 00:06:41.036 size: 0.015991 MiB name: RG_ring_3_109774 00:06:41.036 end memzones------- 00:06:41.036 14:00:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:41.036 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:41.036 list of free elements. size: 12.519348 MiB 00:06:41.036 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:41.036 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:41.036 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:41.036 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:41.036 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:41.036 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:41.036 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:41.036 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:41.036 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:41.036 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:41.036 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:41.036 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:41.036 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:41.036 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:41.036 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:41.036 list of standard malloc elements. 
size: 199.218079 MiB 00:06:41.036 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:41.036 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:41.036 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:41.036 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:41.036 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:41.036 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:41.036 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:41.036 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:41.036 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:41.036 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:41.036 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:41.036 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:41.036 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:41.036 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:41.036 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:41.036 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:41.036 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:41.036 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:41.036 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:41.036 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:41.036 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:41.036 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:41.036 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:41.036 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:41.036 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:41.036 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:41.036 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:41.036 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:41.036 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:41.036 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:41.036 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:41.036 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:41.036 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:41.036 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:41.036 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:41.036 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:41.036 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:41.036 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:41.036 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:41.036 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:41.036 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:41.036 list of memzone associated elements. 
size: 602.262573 MiB 00:06:41.036 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:41.036 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:41.036 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:41.036 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:41.036 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:41.036 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_109774_0 00:06:41.036 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:41.036 associated memzone info: size: 48.002930 MiB name: MP_evtpool_109774_0 00:06:41.036 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:41.036 associated memzone info: size: 48.002930 MiB name: MP_msgpool_109774_0 00:06:41.036 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:41.036 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:41.036 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:41.036 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:41.036 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:41.036 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_109774 00:06:41.036 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:41.036 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_109774 00:06:41.036 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:41.036 associated memzone info: size: 1.007996 MiB name: MP_evtpool_109774 00:06:41.036 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:41.036 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:41.036 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:41.036 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:41.036 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:41.036 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:41.036 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:41.036 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:41.036 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:41.036 associated memzone info: size: 1.000366 MiB name: RG_ring_0_109774 00:06:41.036 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:41.036 associated memzone info: size: 1.000366 MiB name: RG_ring_1_109774 00:06:41.036 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:41.036 associated memzone info: size: 1.000366 MiB name: RG_ring_4_109774 00:06:41.036 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:41.036 associated memzone info: size: 1.000366 MiB name: RG_ring_5_109774 00:06:41.036 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:41.036 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_109774 00:06:41.036 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:41.036 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:41.036 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:41.036 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:41.036 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:41.036 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:41.036 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:41.036 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_109774 00:06:41.036 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:41.036 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:41.036 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:41.036 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:41.036 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:41.036 associated memzone info: size: 0.015991 MiB name: RG_ring_3_109774 00:06:41.036 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:41.036 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:41.036 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:41.036 associated memzone info: size: 0.000183 MiB name: MP_msgpool_109774 00:06:41.036 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:41.036 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_109774 00:06:41.036 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:41.036 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:41.036 14:00:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:41.036 14:00:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 109774 00:06:41.036 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 109774 ']' 00:06:41.036 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 109774 00:06:41.036 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:41.036 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.036 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109774 00:06:41.036 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.036 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.036 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109774' 00:06:41.036 killing process with pid 109774 00:06:41.036 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 109774 00:06:41.036 14:00:48 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 109774 00:06:41.598 00:06:41.598 real 0m1.086s 00:06:41.598 user 0m1.056s 00:06:41.598 sys 0m0.390s 00:06:41.598 14:00:49 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.598 14:00:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.598 ************************************ 00:06:41.598 END TEST dpdk_mem_utility 00:06:41.598 ************************************ 00:06:41.598 14:00:49 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:41.598 14:00:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.598 14:00:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.598 14:00:49 -- common/autotest_common.sh@10 -- # set +x 00:06:41.598 ************************************ 00:06:41.598 START TEST event 00:06:41.598 ************************************ 00:06:41.598 14:00:49 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:41.598 * Looking for test storage... 
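The heap, mempool, and memzone tables above are not printed by spdk_tgt directly: the env_dpdk_get_mem_stats RPC only makes the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that file, plain for the summary tables and with -m <heap-id> for the per-element view. Against any running target the flow is just:

    scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                 # heaps, mempools, memzones
    scripts/dpdk_mem_info.py -m 0            # busy/free elements of heap id 0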
00:06:41.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:41.599 14:00:49 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:41.599 14:00:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:41.599 14:00:49 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:41.599 14:00:49 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:41.599 14:00:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.599 14:00:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.599 ************************************ 00:06:41.599 START TEST event_perf 00:06:41.599 ************************************ 00:06:41.599 14:00:49 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:41.599 Running I/O for 1 seconds...[2024-07-26 14:00:49.546158] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:06:41.599 [2024-07-26 14:00:49.546224] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109966 ] 00:06:41.599 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.599 [2024-07-26 14:00:49.604249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.856 [2024-07-26 14:00:49.722479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.856 [2024-07-26 14:00:49.722633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.856 [2024-07-26 14:00:49.722664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.856 [2024-07-26 14:00:49.722668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.229 Running I/O for 1 seconds... 00:06:43.229 lcore 0: 232886 00:06:43.229 lcore 1: 232884 00:06:43.229 lcore 2: 232884 00:06:43.229 lcore 3: 232885 00:06:43.229 done. 00:06:43.229 00:06:43.229 real 0m1.304s 00:06:43.229 user 0m4.220s 00:06:43.229 sys 0m0.080s 00:06:43.229 14:00:50 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.229 14:00:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.229 ************************************ 00:06:43.229 END TEST event_perf 00:06:43.229 ************************************ 00:06:43.229 14:00:50 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:43.229 14:00:50 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:43.229 14:00:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.229 14:00:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.229 ************************************ 00:06:43.229 START TEST event_reactor 00:06:43.229 ************************************ 00:06:43.229 14:00:50 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:43.229 [2024-07-26 14:00:50.892146] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
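The lcore lines above are event_perf's whole result: it floods the reactors on the given core mask with events for -t seconds and prints one counter per lcore, so aggregate throughput is just the sum, roughly 931 k events across the four cores in this run. A hypothetical re-run outside the harness (the in-tree build path is an assumption):

    # "lcore 0: 232886" -> field 3 is the per-core event count
    ./test/event/event_perf/event_perf -m 0xF -t 1 \
        | awk '$1 == "lcore" { n += $3 } END { print "total events:", n }'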
00:06:43.229 [2024-07-26 14:00:50.892207] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110245 ] 00:06:43.229 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.229 [2024-07-26 14:00:50.950574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.229 [2024-07-26 14:00:51.056211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.162 test_start 00:06:44.162 oneshot 00:06:44.162 tick 100 00:06:44.162 tick 100 00:06:44.162 tick 250 00:06:44.162 tick 100 00:06:44.162 tick 100 00:06:44.162 tick 100 00:06:44.162 tick 250 00:06:44.162 tick 500 00:06:44.162 tick 100 00:06:44.162 tick 100 00:06:44.162 tick 250 00:06:44.162 tick 100 00:06:44.162 tick 100 00:06:44.162 test_end 00:06:44.162 00:06:44.162 real 0m1.283s 00:06:44.162 user 0m1.202s 00:06:44.162 sys 0m0.077s 00:06:44.162 14:00:52 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.162 14:00:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:44.162 ************************************ 00:06:44.162 END TEST event_reactor 00:06:44.162 ************************************ 00:06:44.421 14:00:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.421 14:00:52 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:44.421 14:00:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.421 14:00:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.421 ************************************ 00:06:44.421 START TEST event_reactor_perf 00:06:44.421 ************************************ 00:06:44.421 14:00:52 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.421 [2024-07-26 14:00:52.230987] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:06:44.421 [2024-07-26 14:00:52.231055] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110405 ] 00:06:44.421 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.421 [2024-07-26 14:00:52.289480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.421 [2024-07-26 14:00:52.392611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.823 test_start 00:06:45.823 test_end 00:06:45.823 Performance: 447410 events per second 00:06:45.823 00:06:45.823 real 0m1.286s 00:06:45.823 user 0m1.199s 00:06:45.823 sys 0m0.083s 00:06:45.823 14:00:53 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.823 14:00:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.823 ************************************ 00:06:45.823 END TEST event_reactor_perf 00:06:45.823 ************************************ 00:06:45.823 14:00:53 event -- event/event.sh@49 -- # uname -s 00:06:45.823 14:00:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:45.823 14:00:53 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:45.823 14:00:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.823 14:00:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.823 14:00:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.823 ************************************ 00:06:45.823 START TEST event_scheduler 00:06:45.823 ************************************ 00:06:45.823 14:00:53 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:45.823 * Looking for test storage... 00:06:45.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:45.823 14:00:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:45.823 14:00:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=110583 00:06:45.823 14:00:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:45.823 14:00:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.823 14:00:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 110583 00:06:45.823 14:00:53 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 110583 ']' 00:06:45.823 14:00:53 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.823 14:00:53 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.823 14:00:53 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:45.823 14:00:53 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.823 14:00:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.823 [2024-07-26 14:00:53.652042] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:06:45.823 [2024-07-26 14:00:53.652125] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110583 ] 00:06:45.823 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.823 [2024-07-26 14:00:53.710654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.823 [2024-07-26 14:00:53.818555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.823 [2024-07-26 14:00:53.818578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.823 [2024-07-26 14:00:53.818606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.823 [2024-07-26 14:00:53.818609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.105 14:00:53 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.105 14:00:53 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:46.105 14:00:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:46.105 14:00:53 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.105 14:00:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.105 [2024-07-26 14:00:53.855436] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:46.105 [2024-07-26 14:00:53.855463] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:46.105 [2024-07-26 14:00:53.855497] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:46.105 [2024-07-26 14:00:53.855509] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:46.105 [2024-07-26 14:00:53.855520] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:46.105 14:00:53 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.105 14:00:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:46.105 14:00:53 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.105 14:00:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.105 [2024-07-26 14:00:53.952652] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
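That start-up order is the point of --wait-for-rpc: the scheduler app boots its reactors but parks before subsystem init, so the test can install the dynamic scheduler first. The governor error is non-fatal here, for the reason the message itself gives (the core mask covers some but not all of a set of SMT siblings), and the load/core/busy notices are the dynamic scheduler's limits being applied. The RPC side of that handshake, in sketch form:

    # target was launched with --wait-for-rpc
    scripts/rpc.py framework_set_scheduler dynamic   # must precede init
    scripts/rpc.py framework_start_init              # now subsystems come up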
00:06:46.105 14:00:53 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.105 14:00:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:46.105 14:00:53 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.105 14:00:53 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.105 14:00:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.105 ************************************ 00:06:46.105 START TEST scheduler_create_thread 00:06:46.105 ************************************ 00:06:46.105 14:00:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:46.105 14:00:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:46.105 14:00:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.105 14:00:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.105 2 00:06:46.105 14:00:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.105 14:00:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:46.105 14:00:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.105 14:00:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.105 3 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.105 4 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.105 5 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.105 6 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.105 7 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.105 8 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.105 9 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.105 10 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.105 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.712 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.712 00:06:46.712 real 0m0.592s 00:06:46.712 user 0m0.010s 00:06:46.712 sys 0m0.003s 00:06:46.712 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.712 14:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.712 ************************************ 00:06:46.712 END TEST scheduler_create_thread 00:06:46.712 ************************************ 00:06:46.712 14:00:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:46.712 14:00:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 110583 00:06:46.712 14:00:54 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 110583 ']' 00:06:46.712 14:00:54 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 110583 00:06:46.712 14:00:54 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:46.712 14:00:54 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.712 14:00:54 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110583 00:06:46.712 14:00:54 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:46.712 14:00:54 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:46.712 14:00:54 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110583' 00:06:46.712 killing process with pid 110583 00:06:46.712 14:00:54 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 110583 00:06:46.712 14:00:54 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 110583 00:06:47.305 [2024-07-26 14:00:55.053053] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
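The thread churn above comes from rpc.py's plugin hook: the harness loads the test-local scheduler_plugin module (its directory has to be importable, e.g. via PYTHONPATH), which adds thread-management methods a stock rpc.py does not have. The calls this run made, sketched with the thread ids the create calls returned (11 and 12):

    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100               # 100% active, pinned to core 0
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12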
00:06:47.305 00:06:47.305 real 0m1.759s 00:06:47.305 user 0m2.187s 00:06:47.305 sys 0m0.328s 00:06:47.305 14:00:55 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.305 14:00:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.305 ************************************ 00:06:47.305 END TEST event_scheduler 00:06:47.305 ************************************ 00:06:47.588 14:00:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:47.588 14:00:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:47.588 14:00:55 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.588 14:00:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.588 14:00:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.588 ************************************ 00:06:47.588 START TEST app_repeat 00:06:47.588 ************************************ 00:06:47.588 14:00:55 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:47.588 14:00:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.588 14:00:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.588 14:00:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:47.588 14:00:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.588 14:00:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:47.588 14:00:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:47.588 14:00:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:47.588 14:00:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=110908 00:06:47.588 14:00:55 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:47.588 14:00:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.588 14:00:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 110908' 00:06:47.588 Process app_repeat pid: 110908 00:06:47.588 14:00:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:47.588 14:00:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:47.588 spdk_app_start Round 0 00:06:47.588 14:00:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 110908 /var/tmp/spdk-nbd.sock 00:06:47.588 14:00:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 110908 ']' 00:06:47.588 14:00:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:47.588 14:00:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.588 14:00:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:47.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:47.588 14:00:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.588 14:00:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.588 [2024-07-26 14:00:55.389816] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
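app_repeat drives -t 4 rounds of the same cycle, and the trace that follows shows round 0: create two 64 MB malloc bdevs over the app's RPC socket, export each through the kernel nbd driver, and prove the block devices answer a direct 4 KiB read. Stripped to its commands (the RPC socket path is the one app_repeat was started with; the dd output file here is illustrative):

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096       # -> Malloc0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct              # expect 1+0 records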
00:06:47.588 [2024-07-26 14:00:55.389880] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110908 ] 00:06:47.588 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.588 [2024-07-26 14:00:55.448040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.588 [2024-07-26 14:00:55.559773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.588 [2024-07-26 14:00:55.559777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.859 14:00:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.859 14:00:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:47.859 14:00:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.141 Malloc0 00:06:48.141 14:00:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.426 Malloc1 00:06:48.426 14:00:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.426 14:00:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.426 14:00:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.426 14:00:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:48.426 14:00:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.426 14:00:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:48.426 14:00:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.426 14:00:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.426 14:00:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.426 14:00:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:48.426 14:00:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.426 14:00:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:48.426 14:00:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:48.426 14:00:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:48.426 14:00:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.426 14:00:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:48.711 /dev/nbd0 00:06:48.711 14:00:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:48.711 14:00:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:48.711 14:00:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:48.711 14:00:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:48.711 14:00:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:48.711 14:00:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:48.711 14:00:56 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:48.711 14:00:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:48.711 14:00:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:48.711 14:00:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:48.711 14:00:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:48.711 1+0 records in 00:06:48.711 1+0 records out 00:06:48.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165305 s, 24.8 MB/s 00:06:48.711 14:00:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:48.711 14:00:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:48.711 14:00:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:48.711 14:00:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:48.711 14:00:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:48.711 14:00:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.711 14:00:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.711 14:00:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:48.996 /dev/nbd1 00:06:48.996 14:00:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:48.996 14:00:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:48.996 14:00:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:48.996 14:00:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:48.996 14:00:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:48.996 14:00:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:48.996 14:00:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:48.996 14:00:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:48.996 14:00:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:48.996 14:00:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:48.996 14:00:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:48.996 1+0 records in 00:06:48.996 1+0 records out 00:06:48.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209174 s, 19.6 MB/s 00:06:48.996 14:00:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:48.996 14:00:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:48.996 14:00:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:48.996 14:00:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:48.996 14:00:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:48.996 14:00:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.996 14:00:56 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.996 14:00:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.996 14:00:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.996 14:00:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:49.267 { 00:06:49.267 "nbd_device": "/dev/nbd0", 00:06:49.267 "bdev_name": "Malloc0" 00:06:49.267 }, 00:06:49.267 { 00:06:49.267 "nbd_device": "/dev/nbd1", 00:06:49.267 "bdev_name": "Malloc1" 00:06:49.267 } 00:06:49.267 ]' 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:49.267 { 00:06:49.267 "nbd_device": "/dev/nbd0", 00:06:49.267 "bdev_name": "Malloc0" 00:06:49.267 }, 00:06:49.267 { 00:06:49.267 "nbd_device": "/dev/nbd1", 00:06:49.267 "bdev_name": "Malloc1" 00:06:49.267 } 00:06:49.267 ]' 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:49.267 /dev/nbd1' 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:49.267 /dev/nbd1' 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:49.267 256+0 records in 00:06:49.267 256+0 records out 00:06:49.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494273 s, 212 MB/s 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:49.267 256+0 records in 00:06:49.267 256+0 records out 00:06:49.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208388 s, 50.3 MB/s 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:49.267 256+0 records in 00:06:49.267 256+0 records out 00:06:49.267 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0227783 s, 46.0 MB/s 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.267 14:00:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.545 14:00:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.545 14:00:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.545 14:00:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.545 14:00:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.545 14:00:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.545 14:00:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.545 14:00:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:49.545 14:00:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.545 14:00:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.545 14:00:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:49.828 14:00:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:49.828 14:00:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:49.828 14:00:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:49.828 14:00:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.828 14:00:57 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.828 14:00:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:49.828 14:00:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:49.828 14:00:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.828 14:00:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.828 14:00:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.828 14:00:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.092 14:00:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.092 14:00:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.092 14:00:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.092 14:00:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.092 14:00:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.092 14:00:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.092 14:00:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:50.092 14:00:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.092 14:00:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.092 14:00:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:50.092 14:00:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:50.092 14:00:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:50.092 14:00:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:50.360 14:00:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:50.622 [2024-07-26 14:00:58.502630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.622 [2024-07-26 14:00:58.607262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.622 [2024-07-26 14:00:58.607262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.882 [2024-07-26 14:00:58.658848] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:50.882 [2024-07-26 14:00:58.658920] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:53.412 14:01:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:53.412 14:01:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:53.412 spdk_app_start Round 1 00:06:53.412 14:01:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 110908 /var/tmp/spdk-nbd.sock 00:06:53.412 14:01:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 110908 ']' 00:06:53.412 14:01:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:53.412 14:01:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.412 14:01:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:53.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
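Every app_repeat round above runs the same nbd_rpc_data_verify cycle from bdev/nbd_common.sh: two 64 MiB malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is pushed through each device with O_DIRECT, and cmp checks the round trip before the disks are stopped. A condensed sketch of one device's cycle, built only from commands visible in the trace (socket and scratch-file names as used in this run):

    sock=/var/tmp/spdk-nbd.sock
    rpc="scripts/rpc.py -s $sock"
    $rpc bdev_malloc_create 64 4096                       # 64 MiB bdev, 4 KiB blocks -> Malloc0
    $rpc nbd_start_disk Malloc0 /dev/nbd0                 # expose the bdev as a kernel block device
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # 1 MiB of random data
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0                    # fails if the bdev returns different bytes
    $rpc nbd_stop_disk /dev/nbd0
    rm nbdrandtest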
00:06:53.412 14:01:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.412 14:01:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:53.670 14:01:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.670 14:01:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:53.670 14:01:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.928 Malloc0 00:06:53.928 14:01:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.187 Malloc1 00:06:54.187 14:01:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.187 14:01:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.187 14:01:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.187 14:01:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.187 14:01:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.187 14:01:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.187 14:01:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.187 14:01:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.187 14:01:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.187 14:01:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.187 14:01:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.187 14:01:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.187 14:01:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:54.187 14:01:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.187 14:01:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.187 14:01:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:54.463 /dev/nbd0 00:06:54.463 14:01:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.463 14:01:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.463 14:01:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:54.463 14:01:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:54.463 14:01:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:54.463 14:01:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:54.463 14:01:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:54.463 14:01:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:54.463 14:01:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:54.463 14:01:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:54.463 14:01:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:54.463 1+0 records in 00:06:54.463 1+0 records out 00:06:54.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000146502 s, 28.0 MB/s 00:06:54.463 14:01:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.463 14:01:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:54.463 14:01:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.463 14:01:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:54.463 14:01:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:54.463 14:01:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.463 14:01:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.463 14:01:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:54.722 /dev/nbd1 00:06:54.722 14:01:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:54.722 14:01:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:54.722 14:01:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:54.722 14:01:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:54.722 14:01:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:54.722 14:01:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:54.722 14:01:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:54.722 14:01:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:54.722 14:01:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:54.722 14:01:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:54.722 14:01:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.722 1+0 records in 00:06:54.722 1+0 records out 00:06:54.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022401 s, 18.3 MB/s 00:06:54.722 14:01:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.722 14:01:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:54.722 14:01:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.722 14:01:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:54.722 14:01:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:54.722 14:01:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.722 14:01:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.722 14:01:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.722 14:01:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.722 14:01:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:54.981 { 00:06:54.981 "nbd_device": "/dev/nbd0", 00:06:54.981 "bdev_name": "Malloc0" 00:06:54.981 }, 00:06:54.981 { 00:06:54.981 "nbd_device": "/dev/nbd1", 00:06:54.981 "bdev_name": "Malloc1" 00:06:54.981 } 00:06:54.981 ]' 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:54.981 { 00:06:54.981 "nbd_device": "/dev/nbd0", 00:06:54.981 "bdev_name": "Malloc0" 00:06:54.981 }, 00:06:54.981 { 00:06:54.981 "nbd_device": "/dev/nbd1", 00:06:54.981 "bdev_name": "Malloc1" 00:06:54.981 } 00:06:54.981 ]' 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:54.981 /dev/nbd1' 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:54.981 /dev/nbd1' 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:54.981 256+0 records in 00:06:54.981 256+0 records out 00:06:54.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507218 s, 207 MB/s 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:54.981 256+0 records in 00:06:54.981 256+0 records out 00:06:54.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209197 s, 50.1 MB/s 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:54.981 256+0 records in 00:06:54.981 256+0 records out 00:06:54.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225319 s, 46.5 MB/s 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.981 14:01:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:55.240 14:01:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:55.240 14:01:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:55.240 14:01:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:55.240 14:01:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.240 14:01:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.240 14:01:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:55.240 14:01:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.240 14:01:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.240 14:01:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.240 14:01:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:55.497 14:01:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:55.498 14:01:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:55.498 14:01:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:55.498 14:01:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.498 14:01:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.498 14:01:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:55.498 14:01:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.498 14:01:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.498 14:01:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.498 14:01:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.498 14:01:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.755 14:01:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.755 14:01:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.755 14:01:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.755 14:01:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.755 14:01:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.755 14:01:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.755 14:01:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:55.755 14:01:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.755 14:01:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.755 14:01:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.755 14:01:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.755 14:01:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.755 14:01:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:56.013 14:01:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:56.270 [2024-07-26 14:01:04.263624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.528 [2024-07-26 14:01:04.367951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.528 [2024-07-26 14:01:04.367955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.528 [2024-07-26 14:01:04.420987] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:56.528 [2024-07-26 14:01:04.421055] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:59.057 14:01:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:59.057 14:01:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:59.057 spdk_app_start Round 2 00:06:59.057 14:01:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 110908 /var/tmp/spdk-nbd.sock 00:06:59.057 14:01:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 110908 ']' 00:06:59.057 14:01:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.057 14:01:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.057 14:01:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:59.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:59.057 14:01:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.057 14:01:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:59.315 14:01:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.315 14:01:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:59.315 14:01:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.574 Malloc0 00:06:59.574 14:01:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.833 Malloc1 00:06:59.833 14:01:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.833 14:01:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.833 14:01:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.833 14:01:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:59.833 14:01:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.833 14:01:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:59.833 14:01:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.833 14:01:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.833 14:01:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.833 14:01:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:59.833 14:01:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.833 14:01:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:59.833 14:01:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:59.833 14:01:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:59.833 14:01:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.833 14:01:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:00.091 /dev/nbd0 00:07:00.091 14:01:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.091 14:01:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.091 14:01:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:00.091 14:01:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:00.091 14:01:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.091 14:01:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.091 14:01:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:00.091 14:01:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:00.091 14:01:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.091 14:01:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.091 14:01:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:00.091 1+0 records in 00:07:00.091 1+0 records out 00:07:00.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193595 s, 21.2 MB/s 00:07:00.091 14:01:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.091 14:01:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:00.091 14:01:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.091 14:01:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.091 14:01:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:00.091 14:01:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.091 14:01:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.091 14:01:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:00.349 /dev/nbd1 00:07:00.349 14:01:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:00.349 14:01:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:00.349 14:01:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:00.349 14:01:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:00.349 14:01:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.349 14:01:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.349 14:01:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:00.349 14:01:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:00.349 14:01:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.349 14:01:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.349 14:01:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.349 1+0 records in 00:07:00.349 1+0 records out 00:07:00.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162329 s, 25.2 MB/s 00:07:00.349 14:01:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.349 14:01:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:00.349 14:01:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.349 14:01:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.349 14:01:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:00.349 14:01:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.349 14:01:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.349 14:01:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.349 14:01:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.349 14:01:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.607 14:01:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:00.607 { 00:07:00.607 "nbd_device": "/dev/nbd0", 00:07:00.607 "bdev_name": "Malloc0" 00:07:00.607 }, 00:07:00.607 { 00:07:00.607 "nbd_device": "/dev/nbd1", 00:07:00.607 "bdev_name": "Malloc1" 00:07:00.607 } 00:07:00.607 ]' 00:07:00.607 14:01:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.607 { 00:07:00.607 "nbd_device": "/dev/nbd0", 00:07:00.607 "bdev_name": "Malloc0" 00:07:00.607 }, 00:07:00.607 { 00:07:00.607 "nbd_device": "/dev/nbd1", 00:07:00.607 "bdev_name": "Malloc1" 00:07:00.607 } 00:07:00.607 ]' 00:07:00.607 14:01:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.607 14:01:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:00.607 /dev/nbd1' 00:07:00.607 14:01:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:00.607 /dev/nbd1' 00:07:00.608 14:01:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.608 14:01:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:00.608 14:01:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:00.608 14:01:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:00.608 14:01:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:00.608 14:01:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:00.608 14:01:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.608 14:01:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.608 14:01:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:00.608 14:01:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.608 14:01:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:00.608 14:01:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:00.608 256+0 records in 00:07:00.608 256+0 records out 00:07:00.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509075 s, 206 MB/s 00:07:00.608 14:01:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.608 14:01:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:00.866 256+0 records in 00:07:00.866 256+0 records out 00:07:00.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224977 s, 46.6 MB/s 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:00.866 256+0 records in 00:07:00.866 256+0 records out 00:07:00.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234447 s, 44.7 MB/s 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.866 14:01:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:01.124 14:01:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.124 14:01:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.124 14:01:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.124 14:01:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.124 14:01:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.124 14:01:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.124 14:01:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:01.124 14:01:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.124 14:01:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.124 14:01:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:01.382 14:01:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:01.382 14:01:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:01.382 14:01:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:01.382 14:01:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.382 14:01:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.382 14:01:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:01.382 14:01:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:01.382 14:01:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.382 14:01:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.382 14:01:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.382 14:01:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.641 14:01:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:01.641 14:01:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:01.641 14:01:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.641 14:01:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:01.641 14:01:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:01.641 14:01:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.641 14:01:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:01.641 14:01:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:01.641 14:01:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:01.641 14:01:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:01.641 14:01:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:01.641 14:01:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:01.641 14:01:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:01.899 14:01:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:02.157 [2024-07-26 14:01:10.002738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.157 [2024-07-26 14:01:10.112549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.157 [2024-07-26 14:01:10.112576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.157 [2024-07-26 14:01:10.164273] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.157 [2024-07-26 14:01:10.164360] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:05.448 14:01:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 110908 /var/tmp/spdk-nbd.sock 00:07:05.449 14:01:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 110908 ']' 00:07:05.449 14:01:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:05.449 14:01:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.449 14:01:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:05.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:05.449 14:01:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.449 14:01:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:05.449 14:01:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.449 14:01:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:05.449 14:01:13 event.app_repeat -- event/event.sh@39 -- # killprocess 110908 00:07:05.449 14:01:13 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 110908 ']' 00:07:05.449 14:01:13 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 110908 00:07:05.449 14:01:13 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:05.449 14:01:13 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.449 14:01:13 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110908 00:07:05.449 14:01:13 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.449 14:01:13 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.449 14:01:13 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110908' 00:07:05.449 killing process with pid 110908 00:07:05.449 14:01:13 event.app_repeat -- common/autotest_common.sh@969 -- # kill 110908 00:07:05.449 14:01:13 event.app_repeat -- common/autotest_common.sh@974 -- # wait 110908 00:07:05.449 spdk_app_start is called in Round 0. 00:07:05.449 Shutdown signal received, stop current app iteration 00:07:05.449 Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 reinitialization... 00:07:05.449 spdk_app_start is called in Round 1. 00:07:05.449 Shutdown signal received, stop current app iteration 00:07:05.449 Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 reinitialization... 00:07:05.449 spdk_app_start is called in Round 2. 00:07:05.449 Shutdown signal received, stop current app iteration 00:07:05.449 Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 reinitialization... 00:07:05.449 spdk_app_start is called in Round 3. 
00:07:05.449 Shutdown signal received, stop current app iteration 00:07:05.449 14:01:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:05.449 14:01:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:05.449 00:07:05.449 real 0m17.896s 00:07:05.449 user 0m38.771s 00:07:05.450 sys 0m3.173s 00:07:05.450 14:01:13 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.450 14:01:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:05.450 ************************************ 00:07:05.450 END TEST app_repeat 00:07:05.450 ************************************ 00:07:05.450 14:01:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:05.450 14:01:13 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:05.450 14:01:13 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.450 14:01:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.450 14:01:13 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.450 ************************************ 00:07:05.450 START TEST cpu_locks 00:07:05.450 ************************************ 00:07:05.450 14:01:13 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:05.450 * Looking for test storage... 00:07:05.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:05.450 14:01:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:05.450 14:01:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:05.450 14:01:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:05.450 14:01:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:05.450 14:01:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.450 14:01:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.450 14:01:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.450 ************************************ 00:07:05.450 START TEST default_locks 00:07:05.450 ************************************ 00:07:05.450 14:01:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:05.450 14:01:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=113275 00:07:05.450 14:01:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.450 14:01:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 113275 00:07:05.450 14:01:13 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 113275 ']' 00:07:05.451 14:01:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.451 14:01:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.451 14:01:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
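The killprocess trace above (pid 110908) shows the helper's guard sequence: kill -0 proves the pid is still alive, ps --no-headers -o comm= reads the process name (reactor_0 here) so a sudo wrapper is never signalled directly, and wait reaps the app after SIGTERM. A stripped-down sketch of that guard, with the sudo branch reduced to a bail-out (the real helper resolves sudo's child instead, a path this trace never exercises):

    kill -0 "$pid"                            # non-zero exit if the pid is already gone
    name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in the trace above
    [ "$name" = sudo ] && exit 1              # simplification; see note above
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # reaping works because this shell started the app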
00:07:05.451 14:01:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.449 14:01:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.451 [2024-07-26 14:01:13.439299] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... [2024-07-26 14:01:13.439377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113275 ] 00:07:05.710 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.710 [2024-07-26 14:01:13.497452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.710 [2024-07-26 14:01:13.603320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.969 14:01:13 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.969 14:01:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:05.969 14:01:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 113275 00:07:05.969 14:01:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 113275 00:07:05.969 14:01:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.227 lslocks: write error 00:07:06.227 14:01:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 113275 00:07:06.227 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 113275 ']' 00:07:06.227 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 113275 00:07:06.227 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:06.227 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.227 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113275 00:07:06.227 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.227 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.227 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113275' killing process with pid 113275 00:07:06.227 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 113275 00:07:06.227 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 113275 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 113275 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 113275 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 113275 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 113275 ']' 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (113275) - No such process 00:07:06.794 ERROR: process (pid: 113275) is no longer running 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:06.794 00:07:06.794 real 0m1.206s 00:07:06.794 user 0m1.156s 00:07:06.794 sys 0m0.509s 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.794 14:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.794 ************************************ 00:07:06.794 END TEST default_locks 00:07:06.794 ************************************ 00:07:06.794 14:01:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:06.794 14:01:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.794 14:01:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.794 14:01:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.794 ************************************ 00:07:06.794 START TEST default_locks_via_rpc 00:07:06.794 ************************************ 00:07:06.794 14:01:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:06.794 14:01:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=113439 00:07:06.794 14:01:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.794 14:01:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 113439
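The default_locks case that just completed above reduces to a single assertion: while spdk_tgt (pid 113275) is alive, lslocks must report a lock on a spdk_cpu_lock file, and once the process is gone the lock must be too. A minimal bash sketch of that check, reconstructed from the xtrace at event/cpu_locks.sh@22 (not the verbatim SPDK helper):

    # spdk_tgt claims one lock file per core in its mask, e.g.
    # /var/tmp/spdk_cpu_lock_000 for core 0 under -m 0x1.
    locks_exist() {
        local pid=$1
        # grep -q exits on its first match and closes the pipe, which is
        # why lslocks prints the harmless "lslocks: write error" seen above.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }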
00:07:06.794 14:01:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 113439 ']' 00:07:06.794 14:01:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.794 14:01:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.794 14:01:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.794 14:01:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.794 14:01:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.794 [2024-07-26 14:01:14.698636] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:07:06.794 [2024-07-26 14:01:14.698716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113439 ] 00:07:06.794 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.794 [2024-07-26 14:01:14.755892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.052 [2024-07-26 14:01:14.856859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 113439 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 113439 00:07:07.309 14:01:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.567 14:01:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 113439 
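Condensed, the framework_disable/enable_cpumask_locks sequence traced above shows the same locks being dropped and re-claimed at runtime rather than at launch. Assuming rpc_cmd resolves to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock (tgt_pid standing in for the spdk_tgt pid, 113439 here), the equivalent manual steps are:

    ./scripts/rpc.py framework_disable_cpumask_locks    # releases /var/tmp/spdk_cpu_lock_*
    ! lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock     # no_locks: nothing is held now
    ./scripts/rpc.py framework_enable_cpumask_locks     # re-claims the per-core lock files
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock       # locks_exist passes again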
00:07:07.567 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 113439 ']' 00:07:07.567 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 113439 00:07:07.567 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:07.567 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.567 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113439 00:07:07.567 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.567 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.567 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113439' 00:07:07.567 killing process with pid 113439 00:07:07.567 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 113439 00:07:07.567 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 113439 00:07:07.826 00:07:07.826 real 0m1.197s 00:07:07.826 user 0m1.146s 00:07:07.826 sys 0m0.481s 00:07:07.826 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.826 14:01:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.826 ************************************ 00:07:07.826 END TEST default_locks_via_rpc 00:07:07.826 ************************************ 00:07:08.086 14:01:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:08.086 14:01:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.086 14:01:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.086 14:01:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.086 ************************************ 00:07:08.086 START TEST non_locking_app_on_locked_coremask 00:07:08.086 ************************************ 00:07:08.086 14:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:08.086 14:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=113603 00:07:08.086 14:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.086 14:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 113603 /var/tmp/spdk.sock 00:07:08.086 14:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 113603 ']' 00:07:08.086 14:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.086 14:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.086 14:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:08.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.086 14:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.086 14:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.086 [2024-07-26 14:01:15.941996] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:07:08.086 [2024-07-26 14:01:15.942090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113603 ] 00:07:08.086 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.086 [2024-07-26 14:01:15.998248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.344 [2024-07-26 14:01:16.110688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.344 14:01:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.344 14:01:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:08.344 14:01:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=113613 00:07:08.344 14:01:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:08.344 14:01:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 113613 /var/tmp/spdk2.sock 00:07:08.344 14:01:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 113613 ']' 00:07:08.344 14:01:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.344 14:01:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.344 14:01:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.344 14:01:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.345 14:01:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.603 [2024-07-26 14:01:16.392964] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:07:08.603 [2024-07-26 14:01:16.393049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113613 ] 00:07:08.603 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.603 [2024-07-26 14:01:16.474618] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
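The non_locking_app_on_locked_coremask case starting above brings a second target up on the already-locked core 0; it only succeeds because that target opts out of lock claiming, which is what the "CPU core locks deactivated" notice marks. Stripped to the two invocations the trace shows (paths abbreviated; a sketch rather than the test script):

    ./build/bin/spdk_tgt -m 0x1 &    # claims core 0, i.e. /var/tmp/spdk_cpu_lock_000
    # same core, but no lock claiming and a second RPC socket, so it starts fine:
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

The locking_app_on_unlocked_coremask case further down inverts this: there the first target starts with --disable-cpumask-locks, leaving core 0 free for a second, lock-claiming target.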
00:07:08.603 [2024-07-26 14:01:16.474645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.861 [2024-07-26 14:01:16.689124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.427 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.427 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:09.427 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 113603 00:07:09.427 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 113603 00:07:09.427 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.993 lslocks: write error 00:07:09.993 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 113603 00:07:09.993 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 113603 ']' 00:07:09.993 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 113603 00:07:09.993 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:09.993 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.993 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113603 00:07:09.993 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.993 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.993 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113603' killing process with pid 113603 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 113603 14:01:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 113603 00:07:10.929 14:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 113613 00:07:10.929 14:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 113613 ']' 00:07:10.929 14:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 113613 00:07:10.929 14:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:10.929 14:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.929 14:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113613 00:07:10.929 14:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.929 14:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.929 14:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113613' killing process with pid 113613 14:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 113613 14:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 113613 00:07:11.188 00:07:11.188 real 0m3.167s 00:07:11.188 user 0m3.379s 00:07:11.188 sys 0m0.967s 00:07:11.188 14:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.188 14:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.188 ************************************ 00:07:11.188 END TEST non_locking_app_on_locked_coremask 00:07:11.188 ************************************ 00:07:11.188 14:01:19 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:11.188 14:01:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.188 14:01:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.188 14:01:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.188 ************************************ 00:07:11.188 START TEST locking_app_on_unlocked_coremask 00:07:11.188 ************************************ 00:07:11.188 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:11.188 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=114033 00:07:11.188 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:11.188 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 114033 /var/tmp/spdk.sock 00:07:11.188 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 114033 ']' 00:07:11.188 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.188 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.188 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.188 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.188 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.188 [2024-07-26 14:01:19.162636] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... [2024-07-26 14:01:19.162900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114033 ] 00:07:11.188 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.447 [2024-07-26 14:01:19.222896] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:11.447 [2024-07-26 14:01:19.222928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.447 [2024-07-26 14:01:19.327249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.707 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.707 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:11.707 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=114047 00:07:11.707 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:11.707 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 114047 /var/tmp/spdk2.sock 00:07:11.707 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 114047 ']' 00:07:11.707 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.707 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.707 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.707 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.707 14:01:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.707 [2024-07-26 14:01:19.624378] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:07:11.707 [2024-07-26 14:01:19.624457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114047 ] 00:07:11.707 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.707 [2024-07-26 14:01:19.705464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.964 [2024-07-26 14:01:19.919006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.530 14:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.787 14:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:12.787 14:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 114047 00:07:12.787 14:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 114047 00:07:12.787 14:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.351 lslocks: write error 00:07:13.351 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 114033 00:07:13.351 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 114033 ']' 00:07:13.351 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 114033 00:07:13.351 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:13.351 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.351 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114033 00:07:13.351 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.351 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.351 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114033' 00:07:13.351 killing process with pid 114033 00:07:13.351 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 114033 00:07:13.351 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 114033 00:07:14.285 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 114047 00:07:14.285 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 114047 ']' 00:07:14.285 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 114047 00:07:14.285 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:14.285 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.285 14:01:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114047 00:07:14.285 14:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.285 14:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.285 14:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114047' killing process with pid 114047 00:07:14.285 14:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 114047 00:07:14.285 14:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 114047 00:07:14.544 00:07:14.544 real 0m3.313s 00:07:14.544 user 0m3.500s 00:07:14.544 sys 0m1.023s 00:07:14.544 14:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.544 14:01:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.544 ************************************ 00:07:14.544 END TEST locking_app_on_unlocked_coremask 00:07:14.544 ************************************ 00:07:14.544 14:01:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:14.544 14:01:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.544 14:01:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.544 14:01:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.544 ************************************ 00:07:14.544 START TEST locking_app_on_locked_coremask 00:07:14.544 ************************************ 00:07:14.544 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:14.544 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=114473 00:07:14.544 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.544 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 114473 /var/tmp/spdk.sock 00:07:14.544 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 114473 ']' 00:07:14.544 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.544 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.544 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.544 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.544 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.544 [2024-07-26 14:01:22.526872] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization...
00:07:14.544 [2024-07-26 14:01:22.526956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114473 ] 00:07:14.544 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.802 [2024-07-26 14:01:22.583838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.802 [2024-07-26 14:01:22.684759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=114481 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 114481 /var/tmp/spdk2.sock 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 114481 /var/tmp/spdk2.sock 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 114481 /var/tmp/spdk2.sock 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 114481 ']' 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.061 14:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.061 [2024-07-26 14:01:22.976367] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:07:15.061 [2024-07-26 14:01:22.976448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114481 ] 00:07:15.061 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.061 [2024-07-26 14:01:23.059809] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 114473 has claimed it. [2024-07-26 14:01:23.059904] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:15.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (114481) - No such process 00:07:15.994 ERROR: process (pid: 114481) is no longer running 00:07:15.994 14:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.994 14:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:15.994 14:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:15.994 14:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.994 14:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.994 14:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.994 14:01:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 114473 00:07:15.994 14:01:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 114473 00:07:15.994 14:01:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.994 lslocks: write error 00:07:16.252 14:01:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 114473 00:07:16.252 14:01:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 114473 ']' 00:07:16.252 14:01:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 114473 00:07:16.252 14:01:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:16.252 14:01:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.252 14:01:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114473 00:07:16.252 14:01:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.252 14:01:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.252 14:01:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114473' killing process with pid 114473 00:07:16.252 14:01:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 114473 00:07:16.252 14:01:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 114473 00:07:16.510 00:07:16.510 real 0m1.983s 00:07:16.510 user 0m2.154s 00:07:16.510 sys 0m0.621s 00:07:16.510 14:01:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.510 14:01:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.510 ************************************ 00:07:16.510 END TEST locking_app_on_locked_coremask 00:07:16.510 ************************************ 00:07:16.510 14:01:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:16.510 14:01:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.510 14:01:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.510 14:01:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.510 ************************************ 00:07:16.510 START TEST locking_overlapped_coremask 00:07:16.510 ************************************ 00:07:16.510 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:16.510 14:01:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=114768 00:07:16.510 14:01:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:16.510 14:01:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 114768 /var/tmp/spdk.sock 00:07:16.510 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 114768 ']' 00:07:16.510 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.510 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.510 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.510 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.510 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.784 [2024-07-26 14:01:24.555328] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization...
00:07:16.784 [2024-07-26 14:01:24.555426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114768 ] 00:07:16.784 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.784 [2024-07-26 14:01:24.612106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.784 [2024-07-26 14:01:24.723751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.784 [2024-07-26 14:01:24.723807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.784 [2024-07-26 14:01:24.723810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=114780 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 114780 /var/tmp/spdk2.sock 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 114780 /var/tmp/spdk2.sock 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 114780 /var/tmp/spdk2.sock 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 114780 ']' 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.045 14:01:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.045 [2024-07-26 14:01:25.023258] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:07:17.045 [2024-07-26 14:01:25.023335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114780 ] 00:07:17.302 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.302 [2024-07-26 14:01:25.111516] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 114768 has claimed it. [2024-07-26 14:01:25.111585] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:17.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (114780) - No such process 00:07:17.868 ERROR: process (pid: 114780) is no longer running 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 114768 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 114768 ']' 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 114768 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114768 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.868 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114768' killing process with pid 114768 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 114768 14:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 114768 00:07:18.435 00:07:18.435 real 0m1.672s 00:07:18.435 user 0m4.420s 00:07:18.435 sys 0m0.472s 00:07:18.435 14:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.435 14:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.435 ************************************ 00:07:18.435 END TEST locking_overlapped_coremask 00:07:18.435 ************************************ 00:07:18.435 14:01:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:18.435 14:01:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.435 14:01:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.435 14:01:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.435 ************************************ 00:07:18.435 START TEST locking_overlapped_coremask_via_rpc 00:07:18.435 ************************************ 00:07:18.435 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:18.435 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=114943 00:07:18.435 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:18.435 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 114943 /var/tmp/spdk.sock 00:07:18.435 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 114943 ']' 00:07:18.435 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.435 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.435 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.435 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.435 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.435 [2024-07-26 14:01:26.276349] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... [2024-07-26 14:01:26.276438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114943 ] 00:07:18.435 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.435 [2024-07-26 14:01:26.334635] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
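The locking_overlapped_coremask failure above is plain mask arithmetic: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so both masks contain core 2 and claim_cpu_cores aborts the second target. A sketch of the collision (paths abbreviated):

    # 0x7 = 0b00111 -> cores 0,1,2 ; 0x1c = 0b11100 -> cores 2,3,4
    ./build/bin/spdk_tgt -m 0x7 &                         # locks cores 0, 1 and 2
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock   # exits: core 2 already claimed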
00:07:18.435 [2024-07-26 14:01:26.334666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.435 [2024-07-26 14:01:26.434123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.435 [2024-07-26 14:01:26.434231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.435 [2024-07-26 14:01:26.434235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.694 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.694 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:18.694 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=115042 00:07:18.694 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 115042 /var/tmp/spdk2.sock 00:07:18.694 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:18.694 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 115042 ']' 00:07:18.694 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.694 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.694 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.694 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.694 14:01:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.953 [2024-07-26 14:01:26.731042] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:07:18.953 [2024-07-26 14:01:26.731122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115042 ] 00:07:18.953 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.953 [2024-07-26 14:01:26.817796] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:18.953 [2024-07-26 14:01:26.817843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.212 [2024-07-26 14:01:27.049671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.212 [2024-07-26 14:01:27.053873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:19.212 [2024-07-26 14:01:27.053876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.777 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.777 [2024-07-26 14:01:27.708633] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 114943 has claimed it. 
00:07:19.777 request: 00:07:19.777 { 00:07:19.777 "method": "framework_enable_cpumask_locks", 00:07:19.777 "req_id": 1 00:07:19.777 } 00:07:19.777 Got JSON-RPC error response 00:07:19.777 response: 00:07:19.778 { 00:07:19.778 "code": -32603, 00:07:19.778 "message": "Failed to claim CPU core: 2" 00:07:19.778 } 00:07:19.778 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:19.778 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:19.778 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:19.778 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:19.778 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:19.778 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 114943 /var/tmp/spdk.sock 00:07:19.778 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 114943 ']' 00:07:19.778 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.778 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.778 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.778 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.778 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.036 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.036 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:20.036 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 115042 /var/tmp/spdk2.sock 00:07:20.036 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 115042 ']' 00:07:20.036 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.036 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.036 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:20.036 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.036 14:01:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.295 14:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.295 14:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:20.295 14:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:20.295 14:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:20.295 14:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:20.295 14:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:20.295 00:07:20.295 real 0m2.006s 00:07:20.295 user 0m1.058s 00:07:20.295 sys 0m0.159s 00:07:20.295 14:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.295 14:01:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.295 ************************************ 00:07:20.295 END TEST locking_overlapped_coremask_via_rpc 00:07:20.295 ************************************ 00:07:20.295 14:01:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:20.295 14:01:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 114943 ]] 00:07:20.295 14:01:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 114943 00:07:20.295 14:01:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 114943 ']' 00:07:20.295 14:01:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 114943 00:07:20.295 14:01:28 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:20.295 14:01:28 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.295 14:01:28 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114943 00:07:20.295 14:01:28 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.295 14:01:28 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.295 14:01:28 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114943' 00:07:20.295 killing process with pid 114943 00:07:20.295 14:01:28 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 114943 00:07:20.295 14:01:28 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 114943 00:07:20.862 14:01:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 115042 ]] 00:07:20.862 14:01:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 115042 00:07:20.862 14:01:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 115042 ']' 00:07:20.862 14:01:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 115042 00:07:20.862 14:01:28 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:20.862 14:01:28 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
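check_remaining_locks above compares the glob of live lock files against a brace expansion of the expected set: while the first target (pid 114943, holding cores 0-2) is alive, /var/tmp/spdk_cpu_lock_000 through _002 must still exist even though the second target ran with locks disabled. The same check, condensed from the trace:

# per-core lock files: one file under /var/tmp per claimed core
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ ${locks[*]} == "${locks_expected[*]}" ]] && echo 'core locks 000-002 still held'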
00:07:20.862 14:01:28 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115042 00:07:20.862 14:01:28 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:20.862 14:01:28 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:20.862 14:01:28 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115042' 00:07:20.862 killing process with pid 115042 00:07:20.862 14:01:28 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 115042 00:07:20.862 14:01:28 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 115042 00:07:21.429 14:01:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:21.429 14:01:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:21.429 14:01:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 114943 ]] 00:07:21.429 14:01:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 114943 00:07:21.429 14:01:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 114943 ']' 00:07:21.429 14:01:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 114943 00:07:21.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (114943) - No such process 00:07:21.429 14:01:29 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 114943 is not found' 00:07:21.429 Process with pid 114943 is not found 00:07:21.429 14:01:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 115042 ]] 00:07:21.429 14:01:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 115042 00:07:21.429 14:01:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 115042 ']' 00:07:21.429 14:01:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 115042 00:07:21.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (115042) - No such process 00:07:21.429 14:01:29 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 115042 is not found' 00:07:21.429 Process with pid 115042 is not found 00:07:21.429 14:01:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:21.429 00:07:21.429 real 0m15.880s 00:07:21.429 user 0m27.847s 00:07:21.429 sys 0m5.106s 00:07:21.429 14:01:29 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.429 14:01:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.429 ************************************ 00:07:21.429 END TEST cpu_locks 00:07:21.429 ************************************ 00:07:21.429 00:07:21.429 real 0m39.757s 00:07:21.429 user 1m15.571s 00:07:21.429 sys 0m9.072s 00:07:21.429 14:01:29 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.429 14:01:29 event -- common/autotest_common.sh@10 -- # set +x 00:07:21.429 ************************************ 00:07:21.429 END TEST event 00:07:21.429 ************************************ 00:07:21.429 14:01:29 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:21.429 14:01:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.429 14:01:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.429 14:01:29 -- common/autotest_common.sh@10 -- # set +x 00:07:21.429 ************************************ 00:07:21.429 START TEST thread 00:07:21.429 ************************************ 00:07:21.429 14:01:29 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:21.429 * Looking for test storage... 00:07:21.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:21.429 14:01:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:21.429 14:01:29 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:21.429 14:01:29 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.429 14:01:29 thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.429 ************************************ 00:07:21.429 START TEST thread_poller_perf 00:07:21.429 ************************************ 00:07:21.429 14:01:29 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:21.429 [2024-07-26 14:01:29.354015] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:07:21.429 [2024-07-26 14:01:29.354089] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115443 ] 00:07:21.429 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.429 [2024-07-26 14:01:29.417976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.688 [2024-07-26 14:01:29.526976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.688 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:23.063 ====================================== 00:07:23.063 busy:2713788606 (cyc) 00:07:23.063 total_run_count: 361000 00:07:23.063 tsc_hz: 2700000000 (cyc) 00:07:23.063 ====================================== 00:07:23.063 poller_cost: 7517 (cyc), 2784 (nsec) 00:07:23.063 00:07:23.063 real 0m1.303s 00:07:23.063 user 0m1.216s 00:07:23.063 sys 0m0.082s 00:07:23.063 14:01:30 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.063 14:01:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:23.063 ************************************ 00:07:23.063 END TEST thread_poller_perf 00:07:23.063 ************************************ 00:07:23.063 14:01:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:23.063 14:01:30 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:23.063 14:01:30 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.063 14:01:30 thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.063 ************************************ 00:07:23.063 START TEST thread_poller_perf 00:07:23.063 ************************************ 00:07:23.063 14:01:30 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:23.063 [2024-07-26 14:01:30.705068] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
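The first poller_perf summary above reduces to simple arithmetic: busy cycles divided by run count gives the per-poll cost in cycles, and dividing by the TSC rate converts that to nanoseconds. Worked out with the reported numbers:

busy_cyc=2713788606; runs=361000; tsc_hz=2700000000   # values from the summary above
cost_cyc=$(( busy_cyc / runs ))                       # 7517 cycles per poller invocation
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))       # 7517 cyc / 2.7 GHz = 2784 nsec
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"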
00:07:23.063 [2024-07-26 14:01:30.705134] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115601 ] 00:07:23.063 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.063 [2024-07-26 14:01:30.760934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.063 [2024-07-26 14:01:30.865204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.063 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:23.998 ====================================== 00:07:23.998 busy:2702539785 (cyc) 00:07:23.998 total_run_count: 4877000 00:07:23.998 tsc_hz: 2700000000 (cyc) 00:07:23.998 ====================================== 00:07:23.998 poller_cost: 554 (cyc), 205 (nsec) 00:07:23.998 00:07:23.998 real 0m1.284s 00:07:23.998 user 0m1.207s 00:07:23.998 sys 0m0.072s 00:07:23.998 14:01:31 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.998 14:01:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:23.998 ************************************ 00:07:23.998 END TEST thread_poller_perf 00:07:23.998 ************************************ 00:07:23.998 14:01:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:23.998 00:07:23.998 real 0m2.739s 00:07:23.998 user 0m2.497s 00:07:23.998 sys 0m0.244s 00:07:23.998 14:01:31 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.998 14:01:31 thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.998 ************************************ 00:07:23.998 END TEST thread 00:07:23.998 ************************************ 00:07:24.257 14:01:32 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:24.257 14:01:32 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:24.257 14:01:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.257 14:01:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.257 14:01:32 -- common/autotest_common.sh@10 -- # set +x 00:07:24.257 ************************************ 00:07:24.257 START TEST app_cmdline 00:07:24.257 ************************************ 00:07:24.257 14:01:32 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:24.257 * Looking for test storage... 00:07:24.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:24.257 14:01:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:24.257 14:01:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=115793 00:07:24.257 14:01:32 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:24.257 14:01:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 115793 00:07:24.257 14:01:32 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 115793 ']' 00:07:24.257 14:01:32 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.257 14:01:32 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.258 14:01:32 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:24.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.258 14:01:32 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.258 14:01:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:24.258 [2024-07-26 14:01:32.152501] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:07:24.258 [2024-07-26 14:01:32.152598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115793 ] 00:07:24.258 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.258 [2024-07-26 14:01:32.214005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.516 [2024-07-26 14:01:32.326761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.774 14:01:32 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.774 14:01:32 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:24.774 14:01:32 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:25.032 { 00:07:25.032 "version": "SPDK v24.09-pre git sha1 477912bde", 00:07:25.032 "fields": { 00:07:25.032 "major": 24, 00:07:25.032 "minor": 9, 00:07:25.032 "patch": 0, 00:07:25.032 "suffix": "-pre", 00:07:25.032 "commit": "477912bde" 00:07:25.032 } 00:07:25.032 } 00:07:25.032 14:01:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:25.032 14:01:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:25.032 14:01:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:25.032 14:01:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:25.032 14:01:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:25.032 14:01:32 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.032 14:01:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:25.032 14:01:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:25.032 14:01:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:25.032 14:01:32 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.032 14:01:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:25.032 14:01:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:25.032 14:01:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.032 14:01:32 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:25.032 14:01:32 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.032 14:01:32 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.033 14:01:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.033 14:01:32 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.033 14:01:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:07:25.033 14:01:32 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.033 14:01:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.033 14:01:32 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.033 14:01:32 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:25.033 14:01:32 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.291 request: 00:07:25.291 { 00:07:25.291 "method": "env_dpdk_get_mem_stats", 00:07:25.291 "req_id": 1 00:07:25.291 } 00:07:25.291 Got JSON-RPC error response 00:07:25.291 response: 00:07:25.291 { 00:07:25.291 "code": -32601, 00:07:25.291 "message": "Method not found" 00:07:25.291 } 00:07:25.291 14:01:33 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:25.291 14:01:33 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.291 14:01:33 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:25.291 14:01:33 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.291 14:01:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 115793 00:07:25.291 14:01:33 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 115793 ']' 00:07:25.291 14:01:33 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 115793 00:07:25.291 14:01:33 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:25.291 14:01:33 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.291 14:01:33 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115793 00:07:25.292 14:01:33 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.292 14:01:33 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.292 14:01:33 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115793' 00:07:25.292 killing process with pid 115793 00:07:25.292 14:01:33 app_cmdline -- common/autotest_common.sh@969 -- # kill 115793 00:07:25.292 14:01:33 app_cmdline -- common/autotest_common.sh@974 -- # wait 115793 00:07:25.550 00:07:25.550 real 0m1.513s 00:07:25.550 user 0m1.861s 00:07:25.550 sys 0m0.445s 00:07:25.550 14:01:33 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.550 14:01:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:25.550 ************************************ 00:07:25.550 END TEST app_cmdline 00:07:25.550 ************************************ 00:07:25.809 14:01:33 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:25.809 14:01:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.809 14:01:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.809 14:01:33 -- common/autotest_common.sh@10 -- # set +x 00:07:25.809 ************************************ 00:07:25.809 START TEST version 00:07:25.809 ************************************ 00:07:25.809 14:01:33 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:25.809 * Looking for test storage... 
00:07:25.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:25.809 14:01:33 version -- app/version.sh@17 -- # get_header_version major 00:07:25.809 14:01:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.809 14:01:33 version -- app/version.sh@14 -- # cut -f2 00:07:25.809 14:01:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.809 14:01:33 version -- app/version.sh@17 -- # major=24 00:07:25.809 14:01:33 version -- app/version.sh@18 -- # get_header_version minor 00:07:25.809 14:01:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.809 14:01:33 version -- app/version.sh@14 -- # cut -f2 00:07:25.809 14:01:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.809 14:01:33 version -- app/version.sh@18 -- # minor=9 00:07:25.809 14:01:33 version -- app/version.sh@19 -- # get_header_version patch 00:07:25.809 14:01:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.809 14:01:33 version -- app/version.sh@14 -- # cut -f2 00:07:25.809 14:01:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.809 14:01:33 version -- app/version.sh@19 -- # patch=0 00:07:25.809 14:01:33 version -- app/version.sh@20 -- # get_header_version suffix 00:07:25.809 14:01:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.809 14:01:33 version -- app/version.sh@14 -- # cut -f2 00:07:25.809 14:01:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.809 14:01:33 version -- app/version.sh@20 -- # suffix=-pre 00:07:25.809 14:01:33 version -- app/version.sh@22 -- # version=24.9 00:07:25.809 14:01:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:25.809 14:01:33 version -- app/version.sh@28 -- # version=24.9rc0 00:07:25.809 14:01:33 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:25.809 14:01:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:25.809 14:01:33 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:25.809 14:01:33 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:25.809 00:07:25.809 real 0m0.106s 00:07:25.809 user 0m0.052s 00:07:25.809 sys 0m0.075s 00:07:25.809 14:01:33 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.809 14:01:33 version -- common/autotest_common.sh@10 -- # set +x 00:07:25.809 ************************************ 00:07:25.809 END TEST version 00:07:25.809 ************************************ 00:07:25.809 14:01:33 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:25.809 14:01:33 -- spdk/autotest.sh@202 -- # uname -s 00:07:25.809 14:01:33 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:07:25.809 14:01:33 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:25.809 14:01:33 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:25.809 14:01:33 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
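get_header_version in the version test above pulls each field out of include/spdk/version.h with the same grep/cut/tr pipeline every time; only the #define name changes. A condensed re-implementation under the same header path as this job (the uppercase-argument convention here is illustrative, the suite passes lowercase names and maps them itself):

hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
get_header_version() {  # field 2 of the tab-separated #define line, quotes stripped
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}
major=$(get_header_version MAJOR)    # 24
minor=$(get_header_version MINOR)    # 9
patch=$(get_header_version PATCH)    # 0, so it is left out of the version string
suffix=$(get_header_version SUFFIX)  # -pre, reported as rc0
echo "${major}.${minor}rc0"          # 24.9rc0, matched against python3 spdk.__version__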
00:07:25.809 14:01:33 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:25.809 14:01:33 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:25.809 14:01:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.809 14:01:33 -- common/autotest_common.sh@10 -- # set +x 00:07:25.809 14:01:33 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:25.809 14:01:33 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:25.809 14:01:33 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:25.809 14:01:33 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:25.809 14:01:33 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:25.809 14:01:33 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:25.809 14:01:33 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:25.809 14:01:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:25.809 14:01:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.809 14:01:33 -- common/autotest_common.sh@10 -- # set +x 00:07:25.809 ************************************ 00:07:25.809 START TEST nvmf_tcp 00:07:25.809 ************************************ 00:07:25.809 14:01:33 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:26.069 * Looking for test storage... 00:07:26.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:26.069 14:01:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:26.069 14:01:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:26.069 14:01:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:26.069 14:01:33 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.069 14:01:33 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.069 14:01:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:26.069 ************************************ 00:07:26.069 START TEST nvmf_target_core 00:07:26.069 ************************************ 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:26.069 * Looking for test storage... 00:07:26.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
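Every suite in this log is dispatched through run_test, which is what prints the starred START TEST / END TEST banners seen throughout. A minimal re-creation of the banner pattern only (the real helper in autotest_common.sh also manages xtrace state and per-test timing):

run_test() {
    local name=$1; shift
    printf '************************************\nSTART TEST %s\n************************************\n' "$name"
    "$@"; local rc=$?
    printf '************************************\nEND TEST %s\n************************************\n' "$name"
    return $rc
}
run_test nvmf_target_core ./nvmf_target_core.sh --transport=tcp   # illustrative call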
Linux = Linux ']' 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.069 ************************************ 00:07:26.069 START TEST nvmf_abort 00:07:26.069 ************************************ 00:07:26.069 14:01:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:26.069 * Looking for test storage... 
00:07:26.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.069 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:26.070 14:01:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:28.608 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:28.608 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.608 14:01:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.608 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:28.609 Found net devices under 0000:09:00.0: cvl_0_0 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:28.609 Found net devices under 0000:09:00.1: cvl_0_1 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:28.609 
14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:28.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:07:28.609 00:07:28.609 --- 10.0.0.2 ping statistics --- 00:07:28.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.609 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:28.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:07:28.609 00:07:28.609 --- 10.0.0.1 ping statistics --- 00:07:28.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.609 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=117839 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 117839 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 117839 ']' 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.609 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.609 [2024-07-26 14:01:36.384644] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:07:28.609 [2024-07-26 14:01:36.384745] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.609 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.609 [2024-07-26 14:01:36.450133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.609 [2024-07-26 14:01:36.562823] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.609 [2024-07-26 14:01:36.562898] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.609 [2024-07-26 14:01:36.562926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.609 [2024-07-26 14:01:36.562938] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.609 [2024-07-26 14:01:36.562948] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
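For reference, the namespace plumbing and target launch traced above condense to roughly the following shell sequence. This is a sketch, not the harness code: the interface names, addresses, and flags are the ones from this run, and the polling loop at the end is an illustrative stand-in for the harness's waitforlisten helper.

    # move the target-side port into its own network namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the target inside the namespace on cores 1-3 (-m 0xE), all tracepoints on
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # poll the default RPC socket until the target answers (stand-in for waitforlisten)
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

Running the target in a separate namespace lets a single two-port NIC act as both initiator and target on one host, which is why the script flushes and re-addresses cvl_0_0/cvl_0_1 on every run.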
00:07:28.609 [2024-07-26 14:01:36.563028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.609 [2024-07-26 14:01:36.565548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.609 [2024-07-26 14:01:36.565609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.868 [2024-07-26 14:01:36.716037] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.868 Malloc0 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.868 Delay0 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.868 [2024-07-26 14:01:36.783218] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.868 14:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:28.868 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.127 [2024-07-26 14:01:36.888450] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:31.028 Initializing NVMe Controllers 00:07:31.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:31.028 controller IO queue size 128 less than required 00:07:31.028 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:31.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:31.028 Initialization complete. Launching workers. 
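The subsystem this abort run targets was assembled through the rpc_cmd wrapper in the entries above; written out as plain rpc.py calls, the same sequence looks roughly like this (a sketch, with the delay bdev being the piece that keeps I/O queued long enough to be abortable):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0     # 64 MiB ram bdev, 4096-byte blocks
    # wrap Malloc0 in a delay bdev: 1,000,000 us (~1 s) injected on read and write paths
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420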
00:07:31.028 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 34573 00:07:31.028 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34635, failed to submit 62 00:07:31.028 success 34577, unsuccess 58, failed 0 00:07:31.028 14:01:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:31.028 14:01:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.028 14:01:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:31.028 14:01:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.028 14:01:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:31.028 14:01:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:31.028 14:01:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:31.028 14:01:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:31.028 14:01:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:31.028 14:01:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:31.028 14:01:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.028 14:01:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:31.028 rmmod nvme_tcp 00:07:31.028 rmmod nvme_fabrics 00:07:31.028 rmmod nvme_keyring 00:07:31.028 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.028 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:31.028 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:31.028 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 117839 ']' 00:07:31.028 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 117839 00:07:31.028 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 117839 ']' 00:07:31.028 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 117839 00:07:31.028 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:31.028 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.028 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 117839 00:07:31.286 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:31.286 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:31.286 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 117839' 00:07:31.286 killing process with pid 117839 00:07:31.286 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 117839 00:07:31.286 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 117839 00:07:31.547 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:31.547 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:31.547 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:31.547 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.547 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:31.547 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.547 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.547 14:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.460 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:33.460 00:07:33.460 real 0m7.437s 00:07:33.460 user 0m10.846s 00:07:33.460 sys 0m2.341s 00:07:33.460 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.460 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.460 ************************************ 00:07:33.460 END TEST nvmf_abort 00:07:33.460 ************************************ 00:07:33.460 14:01:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:33.460 14:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:33.460 14:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.460 14:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:33.460 ************************************ 00:07:33.460 START TEST nvmf_ns_hotplug_stress 00:07:33.460 ************************************ 00:07:33.460 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:33.721 * Looking for test storage... 
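The abort counters in the summary above read, roughly, as: nearly every queued read was aborted successfully (success), a handful completed before their abort caught up (unsuccess), and none of the abort commands themselves errored (failed 0). Against a built SPDK tree the same pass can be replayed by hand with the invocation the script used:

    # 1 s of qd-128 I/O against the ~1 s delay bdev, then abort everything in flight
    ./build/examples/abort -c 0x1 -t 1 -l warning -q 128 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'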
00:07:33.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:33.721 14:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
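nvmftestinit now repeats the NIC discovery pass seen at the start of the previous test. Stripped of the xtrace noise, the loop replayed below amounts to the following (a sketch of the traced logic only; the real common.sh also filters by link state and by transport type, as the up/rdma checks in the trace show):

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs registered under this PCI function
        (( ${#pci_net_devs[@]} == 0 )) && continue
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done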
00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.631 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:35.632 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.632 14:01:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:35.632 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:35.632 Found net devices under 0000:09:00.0: cvl_0_0 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:35.632 Found net devices under 0000:09:00.1: cvl_0_1 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.632 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:35.891 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.891 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.891 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.891 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:35.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:35.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:07:35.891 00:07:35.891 --- 10.0.0.2 ping statistics --- 00:07:35.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.891 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:07:35.891 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:35.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:07:35.891 00:07:35.891 --- 10.0.0.1 ping statistics --- 00:07:35.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.891 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:07:35.891 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.891 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:35.891 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:35.891 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=120185 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 120185 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 120185 ']' 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
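With connectivity verified, the script builds cnode1 (Delay0 plus a resizable NULL1 bdev), starts spdk_nvme_perf in the background, and then settles into the cycle sketched below for the length of the run: hot-removing and re-adding the namespace and growing NULL1 while I/O is in flight. Names and parameters are the ones from this run; the loop is a condensation of the rpc calls traced after this point.

    # background reader: 30 s of 512-byte random reads at queue depth 128
    ./build/bin/spdk_nvme_perf -c 0x1 -t 30 -q 128 -w randread -o 512 -Q 1000 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do              # keep stressing while perf is alive
        ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        ./scripts/rpc.py bdev_null_resize NULL1 "$null_size"   # 1001, 1002, ... as in the trace
    done

The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines that follow are expected: reads racing a just-removed namespace complete with an error status, which is exactly the condition the stress test is exercising.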
00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.892 14:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:35.892 [2024-07-26 14:01:43.781896] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:07:35.892 [2024-07-26 14:01:43.781974] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.892 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.892 [2024-07-26 14:01:43.846602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.150 [2024-07-26 14:01:43.954869] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.150 [2024-07-26 14:01:43.954943] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.150 [2024-07-26 14:01:43.954957] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.150 [2024-07-26 14:01:43.954969] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.150 [2024-07-26 14:01:43.954978] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.150 [2024-07-26 14:01:43.955059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.150 [2024-07-26 14:01:43.955123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.150 [2024-07-26 14:01:43.955126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.150 14:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.150 14:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:36.150 14:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.150 14:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:36.150 14:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:36.150 14:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.150 14:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:36.150 14:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:36.407 [2024-07-26 14:01:44.317703] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.407 14:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:36.665 14:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.923 
[2024-07-26 14:01:44.839154] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.923 14:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:37.181 14:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:37.439 Malloc0 00:07:37.439 14:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:37.707 Delay0 00:07:37.708 14:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.972 14:01:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:38.229 NULL1 00:07:38.229 14:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:38.487 14:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=120484 00:07:38.487 14:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:38.487 14:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:38.487 14:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.487 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.862 Read completed with error (sct=0, sc=11) 00:07:39.862 14:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.862 14:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:39.862 14:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:40.120 true 00:07:40.120 14:01:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:40.120 14:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.054 14:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.054 14:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:41.054 14:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:41.312 true 00:07:41.312 14:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:41.312 14:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.570 14:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.828 14:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:41.828 14:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:42.086 true 00:07:42.086 14:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:42.086 14:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.020 14:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.277 14:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:43.277 14:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:43.536 true 00:07:43.536 14:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:43.536 14:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.794 14:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.052 14:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:44.052 14:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:44.310 true 00:07:44.310 14:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:44.310 14:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.243 14:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.500 14:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:45.500 14:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:45.756 true 00:07:45.756 14:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:45.757 14:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.014 14:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.271 14:01:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:46.271 14:01:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:46.529 true 00:07:46.529 14:01:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:46.529 14:01:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.460 14:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.717 14:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:47.717 14:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:47.975 true 00:07:47.975 14:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:47.975 14:01:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.232 14:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.490 14:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:48.490 14:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:48.490 true 00:07:48.748 14:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:48.748 14:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.748 14:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.006 14:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:49.006 14:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:49.263 true 00:07:49.263 14:01:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:49.263 14:01:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.636 14:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.636 14:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:50.636 14:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:50.894 true 00:07:50.894 14:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:50.894 14:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.827 14:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.827 14:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:51.827 14:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:52.084 true 00:07:52.085 14:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:52.085 14:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.342 14:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.599 14:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:52.599 14:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:52.857 true 00:07:52.857 14:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:52.857 14:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.115 14:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.373 14:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:53.373 14:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:53.631 true 00:07:53.631 14:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:53.631 14:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.003 14:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.003 14:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:55.003 14:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:55.261 true 00:07:55.261 14:02:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:55.261 14:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.519 14:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.776 14:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:55.776 14:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:56.034 true 00:07:56.034 14:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:56.034 14:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.968 14:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.226 14:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:57.226 14:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:57.484 true 00:07:57.484 14:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:57.484 14:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.742 14:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.000 14:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:58.000 14:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:58.258 true 00:07:58.258 14:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:58.258 14:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.238 14:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.238 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.238 14:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:59.238 14:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:59.496 true 00:07:59.496 14:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:07:59.496 14:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.754 14:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.012 14:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:00.012 14:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:00.270 true 00:08:00.270 14:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:08:00.270 14:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.205 14:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.463 14:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:01.463 14:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:01.721 true 00:08:01.721 14:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:08:01.721 14:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.985 14:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.243 14:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:02.243 14:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:02.500 true 00:08:02.500 14:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:08:02.500 14:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.875 14:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.875 14:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:03.875 14:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:04.132 true 00:08:04.132 14:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:08:04.132 14:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.390 14:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.657 14:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:04.657 14:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:04.914 true 00:08:04.914 14:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:08:04.914 14:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.847 14:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.847 14:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:05.847 14:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:06.105 true 00:08:06.105 14:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484 00:08:06.105 14:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.364 14:02:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:06.622 14:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:08:06.622 14:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:08:06.880 true
00:08:06.880 14:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484
00:08:06.880 14:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:07.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:07.814 14:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:08.072 14:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:08:08.072 14:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:08:08.330 true
00:08:08.330 14:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484
00:08:08.330 14:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:08.588 14:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:08.846 Initializing NVMe Controllers
00:08:08.846 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:08.846 Controller IO queue size 128, less than required.
00:08:08.846 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:08.846 Controller IO queue size 128, less than required.
00:08:08.846 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:08.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:08.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:08.846 Initialization complete. Launching workers.
00:08:08.846 ========================================================
00:08:08.846                                                            Latency(us)
00:08:08.846 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:08.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1120.50       0.55   61040.17    3096.04 1013931.64
00:08:08.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   11214.42       5.48   11413.97    3259.61  471713.43
00:08:08.846 ========================================================
00:08:08.846 Total                                                                    :   12334.92       6.02   15922.01    3096.04 1013931.64
00:08:08.846 ========================================================
00:08:08.846
00:08:08.846 14:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:08.846 14:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:09.104 true
00:08:09.104 14:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 120484
00:08:09.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (120484) - No such process
00:08:09.104 14:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 120484
00:08:09.104 14:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:09.362 14:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:09.620 14:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:09.620 14:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:09.620 14:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:09.620 14:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:09.620 14:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:09.877 null0
00:08:09.877 14:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:09.877 14:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:09.877 14:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:10.133 null1
00:08:10.133 14:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:10.133 14:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:10.133 14:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:10.391 null2
00:08:10.391 14:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
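The @44-@50 markers repeated throughout the trace above all come from one small loop: while the background I/O generator (PID 120484) is alive, namespace 1 is hot-removed and re-added, and the NULL1 bdev is grown by one size unit per pass. A minimal sketch, reconstructed from the xtrace rather than copied from ns_hotplug_stress.sh; the while framing and the increment form are assumptions, and $rpc abbreviates the full rpc.py path:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=120484                  # background I/O job; the loop runs while it lives
    null_size=1000
    while kill -0 "$perf_pid"; do
        # hot-remove NSID 1, then immediately re-attach the Delay0 bdev
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 $null_size   # resize NULL1 while I/O is in flight
    done

The loop exits exactly where the trace shows kill: (120484) - No such process: the perf job has finished and printed the summary above. The repeated "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are reads failing, presumably against a namespace that is momentarily detached, which is the point of the stress. As a sanity check on the summary, the Total row is the IOPS-weighted mean of the two namespaces: (1120.50 * 61040.17 + 11214.42 * 11413.97) / 12334.92 ≈ 15922 us, matching the reported 15922.01.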
00:08:10.391 14:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:10.391 14:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:08:10.649 null3
00:08:10.649 14:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:10.649 14:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:10.649 14:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:08:10.649 null4
00:08:10.907 14:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:10.907 14:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:10.907 14:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:08:10.907 null5
00:08:11.164 14:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:11.164 14:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:11.164 14:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:08:11.164 null6
00:08:11.165 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:11.165 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:11.165 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:08:11.423 null7
00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
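With the resize loop finished, the test moves to its second phase: each of the eight null bdevs just created (bdev_null_create takes a name, a size of 100, and a 4096-byte block size) gets a background worker that repeatedly attaches and detaches it as its own namespace. The launcher and worker below are a sketch inferred from the @58-@66 and @14-@18 xtrace markers, with $rpc again standing in for the full rpc.py path; the names match the trace, but the exact source layout is an assumption:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {
        # ten attach/detach rounds for one namespace/bdev pair (@16-@18)
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do    # @62-@64: one background worker per pair
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                       # @66: the PIDs 124542 124543 ... in the trace

Because all eight workers run concurrently against distinct NSIDs, the interleaved @16-@18 lines that follow are their xtraces racing one another through the target's namespace attach/detach path.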
00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.423 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.682 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.682 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:11.682 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.682 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:11.682 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.682 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.682 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 124542 124543 124545 124547 124549 124551 124553 124555 00:08:11.682 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.682 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.940 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.940 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:11.940 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:11.940 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:11.940 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.940 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.940 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.940 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.199 14:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.199 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.199 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.199 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.458 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.458 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.458 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.458 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.458 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.458 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.458 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.458 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.717 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.976 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.976 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.976 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.976 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.976 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.976 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.976 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.976 14:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.234 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.493 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.493 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:13.493 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.493 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:13.493 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.493 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.493 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.493 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:13.751 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:13.751 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.751 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.751 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.751 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.752 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.011 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.011 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.011 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.011 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.011 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.011 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.011 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.011 14:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.270 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.529 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.529 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.529 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.529 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.529 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.529 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.529 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.529 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.787 14:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.046 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.046 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.046 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.046 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.046 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.046 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.046 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.046 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.305 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.305 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.305 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.305 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.305 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.305 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.305 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.305 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.305 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.305 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.305 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.305 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.563 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.563 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.563 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.563 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.563 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.563 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.563 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.563 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.563 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.563 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.563 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.563 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.563 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.821 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.821 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.821 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.821 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.821 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.821 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.821 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.080 14:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.338 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.338 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.338 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.338 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.338 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.338 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.338 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.338 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
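A side note on reading this trace: each backgrounded worker keeps its own loop counter, and all of them write xtrace to the same stderr, which is why runs of (( ++i )) / (( i < 10 )) pairs from different workers bunch together with no RPC line in between. A tiny standalone demo of the effect (hypothetical, not part of the test):

set -x
worker() { local i; for ((i = 0; i < 3; ++i)); do :; done; }
worker &
worker &
wait
# The two workers' '(( ++i ))' and '(( i < 3 ))' trace lines interleave
# nondeterministically, just as in the hotplug log above.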
00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.597 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.856 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.856 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.856 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.856 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.856 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.856 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.856 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.856 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:17.114 14:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:17.114 rmmod nvme_tcp 00:08:17.114 rmmod nvme_fabrics 00:08:17.114 rmmod nvme_keyring 00:08:17.114 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:17.114 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:17.114 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:17.114 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 120185 ']' 00:08:17.114 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 120185 00:08:17.114 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 120185 ']' 00:08:17.114 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 120185 00:08:17.114 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:17.114 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:17.114 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 120185 00:08:17.114 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:17.114 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:17.114 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 120185' 00:08:17.114 killing process with pid 120185 00:08:17.114 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 120185 00:08:17.114 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 120185 00:08:17.374 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:17.374 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 
-- # [[ tcp == \t\c\p ]] 00:08:17.374 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:17.374 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:17.374 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:17.374 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.374 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.374 14:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.915 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:19.915 00:08:19.915 real 0m45.915s 00:08:19.915 user 3m29.424s 00:08:19.915 sys 0m16.133s 00:08:19.915 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.915 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:19.915 ************************************ 00:08:19.915 END TEST nvmf_ns_hotplug_stress 00:08:19.915 ************************************ 00:08:19.915 14:02:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:19.915 14:02:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:19.915 14:02:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.915 14:02:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.915 ************************************ 00:08:19.915 START TEST nvmf_delete_subsystem 00:08:19.915 ************************************ 00:08:19.915 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:19.915 * Looking for test storage... 
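Before the delete_subsystem run spins up, the hotplug test tears itself down: nvmftestfini unloads nvme-tcp/nvme-fabrics and then stops the target via killprocess 120185 (autotest_common.sh@950-@974 in the trace above). A hedged sketch of that helper's logic as the xtrace implies it; details may differ from the real function:

killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1                 # @950: refuse an empty pid
    kill -0 "$pid" 2> /dev/null || return 0   # @954; the already-gone branch
                                              # is an assumption
    if [[ $(uname) == Linux ]]; then          # @955
        process_name=$(ps --no-headers -o comm= "$pid")  # @956 -> reactor_1 here
    fi
    if [[ $process_name == sudo ]]; then      # @960: kill sudo's child instead
        kill "$(pgrep -P "$pid")"
    else
        echo "killing process with pid $pid"  # @968
        kill "$pid"                           # @969
    fi
    wait "$pid"   # @974: reap it; works because nvmf_tgt is this shell's child
}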
00:08:19.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.915 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.915 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:19.915 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.916 14:02:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
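At this point nvmftestinit has been entered (@441-@448 above) and NIC discovery is about to start. A hedged skeleton of the two helpers the trace is stepping through; this is simplified from the xtrace, not the real nvmf/common.sh:

nvmftestinit() {
    [[ -n $TEST_TRANSPORT ]] || return 1     # @441: the "'[' -z tcp ']'" check
    trap nvmftestfini SIGINT SIGTERM EXIT    # @446: tear down on any exit path
    prepare_net_devs                         # @448
}

prepare_net_devs() {
    local -g is_hw=no                        # @410
    remove_spdk_ns                           # @412: drop stale spdk netns first
    [[ $NET_TYPE != virt ]] && gather_supported_nvmf_pci_devs   # @414
}

The e810/x722/mlx arrays declared at @296-@298 are the PCI ID buckets that gather_supported_nvmf_pci_devs fills in the lines that follow.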
00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:21.821 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:21.821 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:21.821 Found net devices under 0000:09:00.0: cvl_0_0 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.821 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:21.822 Found net devices under 0000:09:00.1: cvl_0_1 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.822 14:02:29 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:08:21.822 00:08:21.822 --- 10.0.0.2 ping statistics --- 00:08:21.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.822 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:08:21.822 00:08:21.822 --- 10.0.0.1 ping statistics --- 00:08:21.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.822 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=127301 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 127301 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 127301 ']' 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.822 14:02:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.822 [2024-07-26 14:02:29.724223] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
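Those two successful pings close out nvmf_tcp_init (common.sh@229-@268 above): the E810 port cvl_0_0 is moved into a private network namespace to act as the target side, while its sibling port cvl_0_1 stays in the root namespace as the initiator, letting one host play both roles. The sequence, condensed from the trace with only the comments added:

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1            # @244-@245
ip netns add cvl_0_0_ns_spdk                                  # @248
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # @251: target port
ip addr add 10.0.0.1/24 dev cvl_0_1                           # @254: initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @255
ip link set cvl_0_1 up                                        # @258
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up          # @260
ip netns exec cvl_0_0_ns_spdk ip link set lo up               # @261
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # @264
ping -c 1 10.0.0.2                                            # @267: initiator->target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # @268: target->initiator

From here on every target-side command runs under ip netns exec cvl_0_0_ns_spdk, which is why nvmf_tgt is launched that way at common.sh@480 above.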
00:08:21.822 [2024-07-26 14:02:29.724304] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.822 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.822 [2024-07-26 14:02:29.789378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:22.081 [2024-07-26 14:02:29.902945] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.081 [2024-07-26 14:02:29.903001] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.081 [2024-07-26 14:02:29.903015] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.081 [2024-07-26 14:02:29.903026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.081 [2024-07-26 14:02:29.903036] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.081 [2024-07-26 14:02:29.903091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.081 [2024-07-26 14:02:29.903096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.081 [2024-07-26 14:02:30.045890] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.081 [2024-07-26 14:02:30.062077] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.081 NULL1 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.081 Delay0 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.081 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.082 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.082 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.082 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.082 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=127333 00:08:22.082 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:22.082 14:02:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:22.340 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.340 [2024-07-26 14:02:30.136810] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
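(Aside: the rpc_cmd calls traced above assemble the whole target topology for this test. Rewritten as plain rpc.py invocations, this is a sketch against the default /var/tmp/spdk.sock socket; every flag is copied verbatim from the trace.)

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # A 1000 MiB null backing bdev with 512-byte blocks...
    ./scripts/rpc.py bdev_null_create NULL1 1000 512
    # ...wrapped in a delay bdev: -r/-t/-w/-n are the average and p99 read and
    # write latencies in microseconds, so Delay0 holds every I/O for about a second.
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

That injected second of latency is what guarantees a deep queue of in-flight commands for delete_subsystem.sh to tear down underneath.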
00:08:24.237 14:02:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:24.237 14:02:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.237 14:02:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[... batches of four 'Read/Write completed with error (sct=0, sc=8)' records, each followed by 'starting I/O failed: -6', elided ...]
[2024-07-26 14:02:32.265358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae75c0 is same with the state(5) to be set
[... a long run of further 'Read/Write completed with error (sct=0, sc=8)' records elided ...]
[2024-07-26 14:02:32.266012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7c20 is same with the state(5) to be set
[... more failing batches ('starting I/O failed: -6') elided ...]
[2024-07-26 14:02:32.266517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2e34000c00 is same with the state(5) to be set
[... shutdown-time completion records elided ...]
00:08:25.431 [2024-07-26 14:02:33.232077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8ac0 is same with the state(5) to be set
[... completion records elided ...]
00:08:25.431 [2024-07-26 14:02:33.267495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae78f0 is same with the state(5) to be set
[... completion records elided ...]
00:08:25.431 [2024-07-26 14:02:33.267774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae73e0 is same with the state(5) to be set
[... completion records elided ...]
00:08:25.431 [2024-07-26 14:02:33.268578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2e3400d000 is same with the state(5) to be set
[... completion records elided ...]
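(Aside: the storm of records summarized above is the expected result rather than a defect. sct=0/sc=8 decodes to NVMe generic status 0x08, 'Command Aborted due to SQ Deletion': each command queued against Delay0 was still waiting out its injected latency when the subsystem, and with it the submission queues, was deleted. A sketch of reproducing the race by hand, assuming the target topology built above is live:)

    # 5 s of 70/30 random read/write at queue depth 128 on cores 2-3 (mask 0xC),
    # the same spdk_nvme_perf invocation as in the trace.
    ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    # Delete the subsystem while that load is in flight; every queued I/O
    # completes with sct=0, sc=8 and perf exits reporting errors.
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1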
[... remaining 'Read/Write completed with error (sct=0, sc=8)' records elided ...]
[2024-07-26 14:02:33.268797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2e3400d7c0 is same with the state(5) to be set
00:08:25.432 Initializing NVMe Controllers
00:08:25.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:25.432 Controller IO queue size 128, less than required.
00:08:25.432 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:25.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:25.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:25.432 Initialization complete. Launching workers.
00:08:25.432 ========================================================
00:08:25.432                                                                                Latency(us)
00:08:25.432 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:08:25.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     161.88       0.08  931813.87     660.05 2000989.27
00:08:25.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     169.33       0.08  932227.57     443.90 2002095.26
00:08:25.432 ========================================================
00:08:25.432 Total                                                                  :     331.21       0.16  932025.37     443.90 2002095.26
00:08:25.432
00:08:25.432 [2024-07-26 14:02:33.269495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8ac0 (9): Bad file descriptor
00:08:25.432 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:25.432 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:25.432 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 127333
00:08:25.432 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 127333
00:08:25.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (127333) - No such process
00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 127333
00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 127333
00:08:25.997 14:02:33
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 127333 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.997 [2024-07-26 14:02:33.790829] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=127749 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:25.997 14:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 127749 00:08:25.997 14:02:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:25.997 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.997 [2024-07-26 14:02:33.850689] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:26.562 14:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.562 14:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 127749 00:08:26.562 14:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.820 14:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.820 14:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 127749 00:08:26.820 14:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.385 14:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.385 14:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 127749 00:08:27.385 14:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.950 14:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.950 14:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 127749 00:08:27.950 14:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:28.545 14:02:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:28.545 14:02:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 127749 00:08:28.545 14:02:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:28.802 14:02:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:28.802 14:02:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 127749 00:08:28.802 14:02:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:29.059 Initializing NVMe Controllers 00:08:29.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:29.059 Controller IO queue size 128, less than required. 00:08:29.059 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:29.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:29.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:29.059 Initialization complete. Launching workers. 
00:08:29.059 ========================================================
00:08:29.059                                                                                Latency(us)
00:08:29.059 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:08:29.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1004583.31 1000170.19 1043007.42
00:08:29.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1005842.25 1000203.50 1013957.41
00:08:29.060 ========================================================
00:08:29.060 Total                                                                  :     256.00       0.12 1005212.78 1000170.19 1043007.42
00:08:29.060
00:08:29.317 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:29.317 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 127749
00:08:29.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (127749) - No such process
00:08:29.317 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 127749
00:08:29.317 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:29.317 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:29.317 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:29.317 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:08:29.318 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:29.318 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:08:29.318 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:29.318 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:29.318 rmmod nvme_tcp
00:08:29.576 rmmod nvme_fabrics
00:08:29.576 rmmod nvme_keyring
00:08:29.576 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:29.576 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:08:29.576 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:08:29.576 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 127301 ']'
00:08:29.576 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 127301
00:08:29.576 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 127301 ']'
00:08:29.576 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 127301
00:08:29.576 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:08:29.576 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:29.576 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 127301
00:08:29.576 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:29.576 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo
']' 00:08:29.576 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 127301' 00:08:29.576 killing process with pid 127301 00:08:29.576 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 127301 00:08:29.576 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 127301 00:08:29.835 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:29.835 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:29.835 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:29.835 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:29.835 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:29.835 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.835 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.835 14:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.743 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:31.743 00:08:31.743 real 0m12.302s 00:08:31.743 user 0m27.705s 00:08:31.743 sys 0m2.934s 00:08:31.743 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.743 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.743 ************************************ 00:08:31.743 END TEST nvmf_delete_subsystem 00:08:31.743 ************************************ 00:08:31.743 14:02:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:31.743 14:02:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:31.743 14:02:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.743 14:02:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.743 ************************************ 00:08:31.743 START TEST nvmf_host_management 00:08:31.743 ************************************ 00:08:31.743 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:32.002 * Looking for test storage... 
00:08:32.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.002 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.002 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:32.002 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.002 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.002 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.002 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.002 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.002 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.002 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.002 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three tool prefixes repeated, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... identical value with /opt/go/1.21.1/bin prepended, elided ...]:/var/lib/snapd/snap/bin 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... identical value with /opt/protoc/21.7/bin prepended, elided ...]:/var/lib/snapd/snap/bin 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the same PATH value, elided ...]:/var/lib/snapd/snap/bin 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:32.003 14:02:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:34.549 
14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:34.549 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:34.549 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.549 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:34.550 Found net devices under 0000:09:00.0: cvl_0_0 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:34.550 Found net devices under 0000:09:00.1: cvl_0_1 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:34.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:34.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:08:34.550 00:08:34.550 --- 10.0.0.2 ping statistics --- 00:08:34.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.550 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:34.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:08:34.550 00:08:34.550 --- 10.0.0.1 ping statistics --- 00:08:34.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.550 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=130206 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 130206 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 130206 ']' 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 [2024-07-26 14:02:42.230519] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:08:34.550 [2024-07-26 14:02:42.230639] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.550 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.550 [2024-07-26 14:02:42.294545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.550 [2024-07-26 14:02:42.396994] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.550 [2024-07-26 14:02:42.397049] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.550 [2024-07-26 14:02:42.397072] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.550 [2024-07-26 14:02:42.397083] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.550 [2024-07-26 14:02:42.397092] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.550 [2024-07-26 14:02:42.397227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.550 [2024-07-26 14:02:42.397295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.550 [2024-07-26 14:02:42.397361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:34.550 [2024-07-26 14:02:42.397365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.550 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.551 [2024-07-26 14:02:42.553077] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.551 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.551 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:08:34.551 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:34.551 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.551 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.809 Malloc0 00:08:34.809 [2024-07-26 14:02:42.611771] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=130251 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 130251 /var/tmp/bdevperf.sock 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 130251 ']' 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:34.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
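A condensed reading of the nvmf_tcp_init and starttarget traces above: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target, the other port (cvl_0_1) stays in the default namespace as the initiator, and nvmf_tgt is launched inside the namespace. A minimal sketch of the same steps, with interface names, addresses, and flags taken from the trace (the scripts' variable plumbing and error handling are omitted):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0                        # clear stale addresses
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"                              # target side gets its own netns
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, default netns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # initiator -> target reachability
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &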
00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:34.809 { 00:08:34.809 "params": { 00:08:34.809 "name": "Nvme$subsystem", 00:08:34.809 "trtype": "$TEST_TRANSPORT", 00:08:34.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:34.809 "adrfam": "ipv4", 00:08:34.809 "trsvcid": "$NVMF_PORT", 00:08:34.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:34.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:34.809 "hdgst": ${hdgst:-false}, 00:08:34.809 "ddgst": ${ddgst:-false} 00:08:34.809 }, 00:08:34.809 "method": "bdev_nvme_attach_controller" 00:08:34.809 } 00:08:34.809 EOF 00:08:34.809 )") 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:34.809 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:34.810 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:34.810 14:02:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:34.810 "params": { 00:08:34.810 "name": "Nvme0", 00:08:34.810 "trtype": "tcp", 00:08:34.810 "traddr": "10.0.0.2", 00:08:34.810 "adrfam": "ipv4", 00:08:34.810 "trsvcid": "4420", 00:08:34.810 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:34.810 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:34.810 "hdgst": false, 00:08:34.810 "ddgst": false 00:08:34.810 }, 00:08:34.810 "method": "bdev_nvme_attach_controller" 00:08:34.810 }' 00:08:34.810 [2024-07-26 14:02:42.683632] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:08:34.810 [2024-07-26 14:02:42.683709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130251 ] 00:08:34.810 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.810 [2024-07-26 14:02:42.745431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.067 [2024-07-26 14:02:42.856687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.067 Running I/O for 10 seconds... 
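The gen_nvmf_target_json heredoc traced above expands into the JSON that bdevperf reads from the process substitution /dev/fd/63. A rough stand-alone equivalent is sketched below; the inner attach object and the bdevperf flags are verbatim from the trace, but the "subsystems"/"config" envelope is an assumption about gen_nvmf_target_json's wrapper, which the log does not show:

    # Write the generated config to a file instead of /dev/fd/63 for clarity.
    cat > bdevperf.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF
    # Same invocation as the trace: 64-deep queue, 64 KiB I/O, verify for 10 s.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json bdevperf.json -q 64 -o 65536 -w verify -t 10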
00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:35.326 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:35.585 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:35.585 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:35.585 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:35.585 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:35.585 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.585 14:02:43 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:35.585 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.585 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:35.585 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:35.585 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:35.585 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:35.585 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:35.585 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:35.585 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.585 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:35.585 [2024-07-26 14:02:43.475012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.585 [2024-07-26 14:02:43.475079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.585 [2024-07-26 14:02:43.475110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.585 [2024-07-26 14:02:43.475127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.585 [2024-07-26 14:02:43.475145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.585 [2024-07-26 14:02:43.475159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.585 [2024-07-26 14:02:43.475175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.585 [2024-07-26 14:02:43.475188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.585 [2024-07-26 14:02:43.475205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.585 [2024-07-26 14:02:43.475219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 
14:02:43.475286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475630] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.475984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.475999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.476012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.476027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.476039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.476063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.476077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.476091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.476104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.476118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.476131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.476146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.476158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.476173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.476185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.476206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.476219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.476234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.476247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.476261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.476273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.476288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.476301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.476315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.476328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.476342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.476355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.476369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.586 [2024-07-26 14:02:43.476382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.586 [2024-07-26 14:02:43.476396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.476978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.476991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.477006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.477018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.477038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.587 [2024-07-26 14:02:43.477051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.587 [2024-07-26 14:02:43.477144] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x226b5a0 was disconnected and freed. reset controller. 
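For reference, the waitforio gate traced before this abort storm (target/host_management.sh@54-62 above) reduces to a poll loop over bdevperf's RPC socket. A sketch using rpc.py directly; the script's rpc_cmd wrapper and exact retry bookkeeping differ slightly:

    # Poll until Nvme0n1 has completed at least 100 reads, 10 tries max.
    i=10
    while (( i-- > 0 )); do
        ops=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
        (( ops >= 100 )) && break    # enough I/O observed, proceed with the test
        sleep 0.25
    done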
00:08:35.587 [2024-07-26 14:02:43.478272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:08:35.587 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.587 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:08:35.587 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.587 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:35.587 task offset: 81920 on job bdev=Nvme0n1 fails
00:08:35.587
00:08:35.587 Latency(us)
00:08:35.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:35.587 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:35.587 Job: Nvme0n1 ended in about 0.40 seconds with error
00:08:35.587 Verification LBA range: start 0x0 length 0x400
00:08:35.587 Nvme0n1 : 0.40 1601.99 100.12 160.20 0.00 35263.08 2767.08 34758.35
00:08:35.587 ===================================================================================================================
00:08:35.587 Total : 1601.99 100.12 160.20 0.00 35263.08 2767.08 34758.35
00:08:35.587 [2024-07-26 14:02:43.480174] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:35.587 [2024-07-26 14:02:43.480202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5a790 (9): Bad file descriptor
00:08:35.587 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.587 14:02:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-07-26 14:02:43.526738] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
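Condensed, the fault this test injects (target/host_management.sh@84-85 in the trace): the host NQN is dropped from the subsystem's allow list mid-I/O, which deletes the qpairs on the target side (the ABORTED - SQ DELETION storm above) and fails the bdevperf job; re-adding the host lets the controller reset complete. As plain rpc.py calls against the target's default socket, the pair looks like this sketch:

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host0    # in-flight I/O aborted, job fails
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host0    # host readmitted; controller reset succeeds
    sleep 1                          # give the initiator time to reconnect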
00:08:36.520 14:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 130251 00:08:36.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (130251) - No such process 00:08:36.520 14:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:36.520 14:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:36.520 14:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:36.520 14:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:36.520 14:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:36.520 14:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:36.520 14:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:36.520 14:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:36.520 { 00:08:36.520 "params": { 00:08:36.520 "name": "Nvme$subsystem", 00:08:36.520 "trtype": "$TEST_TRANSPORT", 00:08:36.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:36.520 "adrfam": "ipv4", 00:08:36.520 "trsvcid": "$NVMF_PORT", 00:08:36.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:36.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:36.520 "hdgst": ${hdgst:-false}, 00:08:36.520 "ddgst": ${ddgst:-false} 00:08:36.520 }, 00:08:36.520 "method": "bdev_nvme_attach_controller" 00:08:36.520 } 00:08:36.520 EOF 00:08:36.520 )") 00:08:36.520 14:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:36.520 14:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:36.520 14:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:36.520 14:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:36.521 "params": { 00:08:36.521 "name": "Nvme0", 00:08:36.521 "trtype": "tcp", 00:08:36.521 "traddr": "10.0.0.2", 00:08:36.521 "adrfam": "ipv4", 00:08:36.521 "trsvcid": "4420", 00:08:36.521 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:36.521 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:36.521 "hdgst": false, 00:08:36.521 "ddgst": false 00:08:36.521 }, 00:08:36.521 "method": "bdev_nvme_attach_controller" 00:08:36.521 }' 00:08:36.521 [2024-07-26 14:02:44.535614] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:08:36.521 [2024-07-26 14:02:44.535704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130529 ]
00:08:36.780 EAL: No free 2048 kB hugepages reported on node 1
00:08:36.780 [2024-07-26 14:02:44.596560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:36.780 [2024-07-26 14:02:44.710259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:37.038 Running I/O for 1 seconds...
00:08:37.972
00:08:37.972 Latency(us)
00:08:37.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:37.972 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:37.972 Verification LBA range: start 0x0 length 0x400
00:08:37.972 Nvme0n1 : 1.02 1698.15 106.13 0.00 0.00 37074.40 6505.05 32428.18
00:08:37.972 ===================================================================================================================
00:08:37.972 Total : 1698.15 106.13 0.00 0.00 37074.40 6505.05 32428.18
00:08:38.231 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:08:38.231 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:08:38.231 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:08:38.231 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:08:38.231 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:08:38.231 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:38.231 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:08:38.231 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:38.231 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:08:38.231 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:38.231 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:38.231 rmmod nvme_tcp
00:08:38.231 rmmod nvme_fabrics
00:08:38.490 rmmod nvme_keyring
00:08:38.490 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:38.490 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:08:38.490 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:08:38.490 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 130206 ']'
00:08:38.490 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 130206
00:08:38.490 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 130206 ']'
00:08:38.490 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 130206
00:08:38.490 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management --
common/autotest_common.sh@955 -- # uname 00:08:38.490 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.490 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 130206 00:08:38.490 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:38.490 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:38.490 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 130206' 00:08:38.490 killing process with pid 130206 00:08:38.490 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 130206 00:08:38.490 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 130206 00:08:38.751 [2024-07-26 14:02:46.554043] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:38.751 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:38.751 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:38.751 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:38.751 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.751 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.751 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.751 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.751 14:02:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.663 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:40.663 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:40.663 00:08:40.663 real 0m8.869s 00:08:40.663 user 0m19.619s 00:08:40.663 sys 0m2.813s 00:08:40.663 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.663 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.663 ************************************ 00:08:40.663 END TEST nvmf_host_management 00:08:40.663 ************************************ 00:08:40.663 14:02:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:40.663 14:02:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:40.663 14:02:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.663 14:02:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:40.663 ************************************ 00:08:40.663 START TEST nvmf_lvol 00:08:40.663 ************************************ 00:08:40.663 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:40.922 * Looking for test storage... 00:08:40.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.922 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:40.923 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:40.923 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.923 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:40.923 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:40.923 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:40.923 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.923 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.923 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.923 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:40.923 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:40.923 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:40.923 14:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
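The sizing knobs for the lvol test are now set; restated from the trace above (the later bdev_malloc_create and bdev_lvol_create calls pass these values, which SPDK interprets as MiB for the sizes and bytes for the block size):

    MALLOC_BDEV_SIZE=64        # two 64 MiB malloc bdevs will back a RAID0
    MALLOC_BLOCK_SIZE=512      # 512-byte logical blocks
    LVOL_BDEV_INIT_SIZE=20     # the lvol is created at 20 MiB ...
    LVOL_BDEV_FINAL_SIZE=30    # ... and resized to 30 MiB while I/O is running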
00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:43.459 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:43.459 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:43.459 Found net devices under 0000:09:00.0: cvl_0_0 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:43.459 Found net devices under 0000:09:00.1: cvl_0_1 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.459 14:02:50 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.459 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:43.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:08:43.460 00:08:43.460 --- 10.0.0.2 ping statistics --- 00:08:43.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.460 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:43.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:08:43.460 00:08:43.460 --- 10.0.0.1 ping statistics --- 00:08:43.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.460 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=132731 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 132731 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 132731 ']' 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.460 [2024-07-26 14:02:51.091074] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
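Before the target was launched above, nvmf_tcp_init wired up the network namespaces; collecting those commands verbatim from the trace (cvl_0_0 is the target-side E810 port, cvl_0_1 the initiator side, so the NVMe/TCP traffic really crosses the two ports):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                   # initiator -> target sanity check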
00:08:43.460 [2024-07-26 14:02:51.091163] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.460 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.460 [2024-07-26 14:02:51.153102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:43.460 [2024-07-26 14:02:51.263108] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.460 [2024-07-26 14:02:51.263166] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.460 [2024-07-26 14:02:51.263194] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.460 [2024-07-26 14:02:51.263206] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.460 [2024-07-26 14:02:51.263216] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.460 [2024-07-26 14:02:51.263297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.460 [2024-07-26 14:02:51.263361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.460 [2024-07-26 14:02:51.263363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.460 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:43.718 [2024-07-26 14:02:51.630181] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.718 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:43.976 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:43.976 14:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.234 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:44.234 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:44.491 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:44.750 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=08121aa7-8005-4046-832d-a47fe9521543 
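That UUID completes the target-side bdev stack for the lvol test; condensed from the scripts/rpc.py calls above (rpc.py below abbreviates the full path used in the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192                     # TCP transport for the target
    rpc.py bdev_malloc_create 64 512                                   # -> Malloc0
    rpc.py bdev_malloc_create 64 512                                   # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # RAID0 over both, 64 KiB strips
    rpc.py bdev_lvol_create_lvstore raid0 lvs                          # -> 08121aa7-8005-4046-832d-a47fe9521543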
00:08:44.750 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 08121aa7-8005-4046-832d-a47fe9521543 lvol 20 00:08:45.007 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=729448c0-1468-47b3-9603-a04a60148a12 00:08:45.007 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:45.264 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 729448c0-1468-47b3-9603-a04a60148a12 00:08:45.521 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:45.779 [2024-07-26 14:02:53.671711] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.779 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:46.036 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=133037 00:08:46.036 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:46.036 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:46.036 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.970 14:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 729448c0-1468-47b3-9603-a04a60148a12 MY_SNAPSHOT 00:08:47.227 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a73138e2-d167-4ef2-b05b-b4facdd68e01 00:08:47.227 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 729448c0-1468-47b3-9603-a04a60148a12 30 00:08:47.793 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a73138e2-d167-4ef2-b05b-b4facdd68e01 MY_CLONE 00:08:47.793 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6206ab6f-fba1-466d-9edf-44bc1232e3ca 00:08:47.793 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6206ab6f-fba1-466d-9edf-44bc1232e3ca 00:08:48.727 14:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 133037 00:08:56.837 Initializing NVMe Controllers 00:08:56.837 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:56.837 Controller IO queue size 128, less than required. 00:08:56.837 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
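While that spdk_nvme_perf job (4 KiB random writes, queue depth 128, 10 seconds, cores 0x18) runs against the exported namespace, the script exercises the lvol lifecycle under load; the calls from the trace:

    rpc.py bdev_lvol_snapshot 729448c0-1468-47b3-9603-a04a60148a12 MY_SNAPSHOT   # snapshot the live lvol
    rpc.py bdev_lvol_resize 729448c0-1468-47b3-9603-a04a60148a12 30              # grow it 20 -> 30 MiB
    rpc.py bdev_lvol_clone a73138e2-d167-4ef2-b05b-b4facdd68e01 MY_CLONE         # writable clone of the snapshot
    rpc.py bdev_lvol_inflate 6206ab6f-fba1-466d-9edf-44bc1232e3ca                # decouple the clone from its parent
    wait 133037                                                                  # then let perf finish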
00:08:56.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:56.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:56.837 Initialization complete. Launching workers. 00:08:56.837 ======================================================== 00:08:56.837 Latency(us) 00:08:56.837 Device Information : IOPS MiB/s Average min max 00:08:56.837 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10368.70 40.50 12351.02 1973.86 81663.02 00:08:56.837 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10451.50 40.83 12256.19 2109.08 82614.53 00:08:56.837 ======================================================== 00:08:56.837 Total : 20820.20 81.33 12303.42 1973.86 82614.53 00:08:56.837 00:08:56.837 14:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:56.837 14:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 729448c0-1468-47b3-9603-a04a60148a12 00:08:57.095 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08121aa7-8005-4046-832d-a47fe9521543 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.353 rmmod nvme_tcp 00:08:57.353 rmmod nvme_fabrics 00:08:57.353 rmmod nvme_keyring 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 132731 ']' 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 132731 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 132731 ']' 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 132731 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.353 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 132731 00:08:57.610 14:03:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.610 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:57.610 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 132731' 00:08:57.610 killing process with pid 132731 00:08:57.610 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 132731 00:08:57.610 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 132731 00:08:57.870 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.870 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:57.870 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:57.870 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.870 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.870 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.870 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.870 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.780 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:59.780 00:08:59.780 real 0m19.060s 00:08:59.780 user 1m4.340s 00:08:59.780 sys 0m5.821s 00:08:59.780 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.780 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:59.780 ************************************ 00:08:59.780 END TEST nvmf_lvol 00:08:59.780 ************************************ 00:08:59.780 14:03:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:59.780 14:03:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:59.780 14:03:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.780 14:03:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.780 ************************************ 00:08:59.780 START TEST nvmf_lvs_grow 00:08:59.780 ************************************ 00:08:59.780 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:00.039 * Looking for test storage... 
00:09:00.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.039 14:03:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:00.039 14:03:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.039 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:01.940 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.940 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:09:01.940 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:01.940 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:01.940 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:01.940 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:01.940 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:01.940 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:01.941 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:01.941 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:01.941 
14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:01.941 Found net devices under 0000:09:00.0: cvl_0_0 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:01.941 Found net devices under 0000:09:00.1: cvl_0_1 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.941 14:03:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.941 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:02.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:09:02.200 00:09:02.200 --- 10.0.0.2 ping statistics --- 00:09:02.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.200 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:09:02.200 00:09:02.200 --- 10.0.0.1 ping statistics --- 00:09:02.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.200 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=136932 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 136932 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 136932 ']' 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.200 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:02.200 [2024-07-26 14:03:10.048896] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
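Note the core mask: unlike the lvol suite (-m 0x7, three reactors), lvs_grow runs the target on a single core. The launch line from the trace, reflowed for readability:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1    # shm id 0, all tracepoint groups, one reactor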
00:09:02.200 [2024-07-26 14:03:10.048979] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.200 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.200 [2024-07-26 14:03:10.112738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.200 [2024-07-26 14:03:10.215750] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.200 [2024-07-26 14:03:10.215804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.200 [2024-07-26 14:03:10.215820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.200 [2024-07-26 14:03:10.215832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.200 [2024-07-26 14:03:10.215843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.200 [2024-07-26 14:03:10.215877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.458 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.458 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:02.458 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:02.458 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:02.458 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:02.458 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.458 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:02.716 [2024-07-26 14:03:10.627641] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.716 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:02.716 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.716 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.716 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:02.716 ************************************ 00:09:02.716 START TEST lvs_grow_clean 00:09:02.716 ************************************ 00:09:02.716 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:02.716 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:02.716 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:02.716 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:02.716 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:09:02.716 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:02.716 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:02.716 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:02.716 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:02.716 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:02.974 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:02.974 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:03.232 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=014cf55a-4d0d-43c7-ad56-4f8b0988c422 00:09:03.232 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 014cf55a-4d0d-43c7-ad56-4f8b0988c422 00:09:03.232 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:03.490 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:03.490 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:03.490 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 014cf55a-4d0d-43c7-ad56-4f8b0988c422 lvol 150 00:09:03.749 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7883aca5-6e66-461f-b626-a0318018431f 00:09:03.749 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:03.749 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:04.007 [2024-07-26 14:03:11.912679] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:04.007 [2024-07-26 14:03:11.912770] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:04.007 true 00:09:04.007 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 014cf55a-4d0d-43c7-ad56-4f8b0988c422 00:09:04.007 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:04.265 14:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:04.265 14:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:04.523 14:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7883aca5-6e66-461f-b626-a0318018431f 00:09:04.781 14:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:05.038 [2024-07-26 14:03:12.895664] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.038 14:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:05.296 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=137367 00:09:05.297 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:05.297 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:05.297 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 137367 /var/tmp/bdevperf.sock 00:09:05.297 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 137367 ']' 00:09:05.297 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:05.297 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.297 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:05.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:05.297 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.297 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:05.297 [2024-07-26 14:03:13.193872] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
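The trace up to this point is the lvs_grow_clean setup: a 200 MiB file-backed AIO bdev, an lvstore with 4 MiB clusters (reported as 49 data clusters), a 150 MiB lvol, then the backing file grown to 400 MiB and rescanned before the lvol is exported over NVMe/TCP. A condensed sketch of those steps, with paths shortened (aio_bdev_file stands in for the test's backing-file path, <lvs-uuid>/<lvol-uuid> for the UUIDs printed in this run):

    truncate -s 200M aio_bdev_file                        # 200 MiB backing file
    rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096    # AIO bdev, 4 KiB blocks
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
           --md-pages-per-cluster-ratio 300 aio_bdev lvs  # reports 49 data clusters
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150        # 150 MiB logical volume
    truncate -s 400M aio_bdev_file                        # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev                       # 51200 -> 102400 blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Note that the rescan alone does not grow the lvstore: total_data_clusters stays at 49 until bdev_lvol_grow_lvstore is called later in the run.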
00:09:05.297 [2024-07-26 14:03:13.193957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137367 ] 00:09:05.297 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.297 [2024-07-26 14:03:13.250431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.555 [2024-07-26 14:03:13.356087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.555 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.555 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:05.555 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:05.813 Nvme0n1 00:09:05.813 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:06.379 [ 00:09:06.379 { 00:09:06.379 "name": "Nvme0n1", 00:09:06.379 "aliases": [ 00:09:06.379 "7883aca5-6e66-461f-b626-a0318018431f" 00:09:06.379 ], 00:09:06.379 "product_name": "NVMe disk", 00:09:06.379 "block_size": 4096, 00:09:06.379 "num_blocks": 38912, 00:09:06.379 "uuid": "7883aca5-6e66-461f-b626-a0318018431f", 00:09:06.379 "assigned_rate_limits": { 00:09:06.379 "rw_ios_per_sec": 0, 00:09:06.379 "rw_mbytes_per_sec": 0, 00:09:06.379 "r_mbytes_per_sec": 0, 00:09:06.379 "w_mbytes_per_sec": 0 00:09:06.379 }, 00:09:06.379 "claimed": false, 00:09:06.379 "zoned": false, 00:09:06.379 "supported_io_types": { 00:09:06.379 "read": true, 00:09:06.379 "write": true, 00:09:06.379 "unmap": true, 00:09:06.379 "flush": true, 00:09:06.379 "reset": true, 00:09:06.379 "nvme_admin": true, 00:09:06.379 "nvme_io": true, 00:09:06.379 "nvme_io_md": false, 00:09:06.379 "write_zeroes": true, 00:09:06.379 "zcopy": false, 00:09:06.379 "get_zone_info": false, 00:09:06.379 "zone_management": false, 00:09:06.379 "zone_append": false, 00:09:06.379 "compare": true, 00:09:06.379 "compare_and_write": true, 00:09:06.379 "abort": true, 00:09:06.379 "seek_hole": false, 00:09:06.379 "seek_data": false, 00:09:06.379 "copy": true, 00:09:06.379 "nvme_iov_md": false 00:09:06.379 }, 00:09:06.379 "memory_domains": [ 00:09:06.379 { 00:09:06.379 "dma_device_id": "system", 00:09:06.379 "dma_device_type": 1 00:09:06.379 } 00:09:06.379 ], 00:09:06.379 "driver_specific": { 00:09:06.379 "nvme": [ 00:09:06.379 { 00:09:06.379 "trid": { 00:09:06.379 "trtype": "TCP", 00:09:06.379 "adrfam": "IPv4", 00:09:06.379 "traddr": "10.0.0.2", 00:09:06.379 "trsvcid": "4420", 00:09:06.379 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:06.379 }, 00:09:06.379 "ctrlr_data": { 00:09:06.379 "cntlid": 1, 00:09:06.379 "vendor_id": "0x8086", 00:09:06.379 "model_number": "SPDK bdev Controller", 00:09:06.379 "serial_number": "SPDK0", 00:09:06.379 "firmware_revision": "24.09", 00:09:06.379 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:06.379 "oacs": { 00:09:06.379 "security": 0, 00:09:06.379 "format": 0, 00:09:06.379 "firmware": 0, 00:09:06.379 "ns_manage": 0 00:09:06.379 }, 00:09:06.379 
"multi_ctrlr": true, 00:09:06.379 "ana_reporting": false 00:09:06.379 }, 00:09:06.379 "vs": { 00:09:06.379 "nvme_version": "1.3" 00:09:06.379 }, 00:09:06.379 "ns_data": { 00:09:06.379 "id": 1, 00:09:06.379 "can_share": true 00:09:06.379 } 00:09:06.379 } 00:09:06.379 ], 00:09:06.379 "mp_policy": "active_passive" 00:09:06.380 } 00:09:06.380 } 00:09:06.380 ] 00:09:06.380 14:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=137501 00:09:06.380 14:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:06.380 14:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:06.380 Running I/O for 10 seconds... 00:09:07.314 Latency(us) 00:09:07.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.314 Nvme0n1 : 1.00 15687.00 61.28 0.00 0.00 0.00 0.00 0.00 00:09:07.314 =================================================================================================================== 00:09:07.314 Total : 15687.00 61.28 0.00 0.00 0.00 0.00 0.00 00:09:07.314 00:09:08.248 14:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 014cf55a-4d0d-43c7-ad56-4f8b0988c422 00:09:08.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.249 Nvme0n1 : 2.00 15768.50 61.60 0.00 0.00 0.00 0.00 0.00 00:09:08.249 =================================================================================================================== 00:09:08.249 Total : 15768.50 61.60 0.00 0.00 0.00 0.00 0.00 00:09:08.249 00:09:08.507 true 00:09:08.507 14:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 014cf55a-4d0d-43c7-ad56-4f8b0988c422 00:09:08.507 14:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:08.765 14:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:08.765 14:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:08.765 14:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 137501 00:09:09.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.332 Nvme0n1 : 3.00 15892.67 62.08 0.00 0.00 0.00 0.00 0.00 00:09:09.332 =================================================================================================================== 00:09:09.332 Total : 15892.67 62.08 0.00 0.00 0.00 0.00 0.00 00:09:09.332 00:09:10.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.266 Nvme0n1 : 4.00 15984.75 62.44 0.00 0.00 0.00 0.00 0.00 00:09:10.266 =================================================================================================================== 00:09:10.266 Total : 15984.75 62.44 0.00 0.00 0.00 0.00 0.00 00:09:10.266 00:09:11.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:09:11.642 Nvme0n1 : 5.00 16026.20 62.60 0.00 0.00 0.00 0.00 0.00 00:09:11.642 =================================================================================================================== 00:09:11.642 Total : 16026.20 62.60 0.00 0.00 0.00 0.00 0.00 00:09:11.642 00:09:12.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.577 Nvme0n1 : 6.00 16085.67 62.83 0.00 0.00 0.00 0.00 0.00 00:09:12.577 =================================================================================================================== 00:09:12.577 Total : 16085.67 62.83 0.00 0.00 0.00 0.00 0.00 00:09:12.577 00:09:13.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.511 Nvme0n1 : 7.00 16137.43 63.04 0.00 0.00 0.00 0.00 0.00 00:09:13.511 =================================================================================================================== 00:09:13.511 Total : 16137.43 63.04 0.00 0.00 0.00 0.00 0.00 00:09:13.511 00:09:14.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.486 Nvme0n1 : 8.00 16168.12 63.16 0.00 0.00 0.00 0.00 0.00 00:09:14.486 =================================================================================================================== 00:09:14.486 Total : 16168.12 63.16 0.00 0.00 0.00 0.00 0.00 00:09:14.486 00:09:15.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.420 Nvme0n1 : 9.00 16192.00 63.25 0.00 0.00 0.00 0.00 0.00 00:09:15.420 =================================================================================================================== 00:09:15.420 Total : 16192.00 63.25 0.00 0.00 0.00 0.00 0.00 00:09:15.420 00:09:16.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.354 Nvme0n1 : 10.00 16217.60 63.35 0.00 0.00 0.00 0.00 0.00 00:09:16.354 =================================================================================================================== 00:09:16.354 Total : 16217.60 63.35 0.00 0.00 0.00 0.00 0.00 00:09:16.354 00:09:16.354 00:09:16.354 Latency(us) 00:09:16.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.354 Nvme0n1 : 10.00 16223.67 63.37 0.00 0.00 7885.08 4441.88 15340.28 00:09:16.354 =================================================================================================================== 00:09:16.354 Total : 16223.67 63.37 0.00 0.00 7885.08 4441.88 15340.28 00:09:16.354 0 00:09:16.354 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 137367 00:09:16.354 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 137367 ']' 00:09:16.354 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 137367 00:09:16.354 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:16.354 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:16.354 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 137367 00:09:16.354 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:16.354 14:03:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:16.354 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 137367' 00:09:16.354 killing process with pid 137367 00:09:16.354 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 137367 00:09:16.354 Received shutdown signal, test time was about 10.000000 seconds 00:09:16.354 00:09:16.354 Latency(us) 00:09:16.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.354 =================================================================================================================== 00:09:16.354 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:16.354 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 137367 00:09:16.612 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:16.870 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:17.128 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 014cf55a-4d0d-43c7-ad56-4f8b0988c422 00:09:17.128 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:17.386 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:17.386 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:17.386 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:17.644 [2024-07-26 14:03:25.506245] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:17.644 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 014cf55a-4d0d-43c7-ad56-4f8b0988c422 00:09:17.644 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:17.644 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 014cf55a-4d0d-43c7-ad56-4f8b0988c422 00:09:17.644 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.644 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.644 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.644 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.644 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.644 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.644 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.644 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:17.644 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 014cf55a-4d0d-43c7-ad56-4f8b0988c422 00:09:17.902 request: 00:09:17.902 { 00:09:17.902 "uuid": "014cf55a-4d0d-43c7-ad56-4f8b0988c422", 00:09:17.902 "method": "bdev_lvol_get_lvstores", 00:09:17.902 "req_id": 1 00:09:17.902 } 00:09:17.902 Got JSON-RPC error response 00:09:17.902 response: 00:09:17.902 { 00:09:17.902 "code": -19, 00:09:17.902 "message": "No such device" 00:09:17.902 } 00:09:17.902 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:17.902 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:17.902 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:17.902 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:17.902 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:18.161 aio_bdev 00:09:18.161 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7883aca5-6e66-461f-b626-a0318018431f 00:09:18.161 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=7883aca5-6e66-461f-b626-a0318018431f 00:09:18.161 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.161 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:18.161 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.161 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.161 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:18.419 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 7883aca5-6e66-461f-b626-a0318018431f -t 2000 00:09:18.677 [ 00:09:18.677 { 00:09:18.677 "name": "7883aca5-6e66-461f-b626-a0318018431f", 00:09:18.677 "aliases": [ 00:09:18.677 "lvs/lvol" 00:09:18.677 ], 00:09:18.677 "product_name": "Logical Volume", 00:09:18.677 "block_size": 4096, 00:09:18.677 "num_blocks": 38912, 00:09:18.677 "uuid": "7883aca5-6e66-461f-b626-a0318018431f", 00:09:18.677 "assigned_rate_limits": { 00:09:18.677 "rw_ios_per_sec": 0, 00:09:18.677 "rw_mbytes_per_sec": 0, 00:09:18.677 "r_mbytes_per_sec": 0, 00:09:18.677 "w_mbytes_per_sec": 0 00:09:18.677 }, 00:09:18.677 "claimed": false, 00:09:18.677 "zoned": false, 00:09:18.677 "supported_io_types": { 00:09:18.677 "read": true, 00:09:18.677 "write": true, 00:09:18.677 "unmap": true, 00:09:18.677 "flush": false, 00:09:18.677 "reset": true, 00:09:18.677 "nvme_admin": false, 00:09:18.677 "nvme_io": false, 00:09:18.677 "nvme_io_md": false, 00:09:18.677 "write_zeroes": true, 00:09:18.677 "zcopy": false, 00:09:18.677 "get_zone_info": false, 00:09:18.677 "zone_management": false, 00:09:18.677 "zone_append": false, 00:09:18.677 "compare": false, 00:09:18.677 "compare_and_write": false, 00:09:18.677 "abort": false, 00:09:18.677 "seek_hole": true, 00:09:18.677 "seek_data": true, 00:09:18.677 "copy": false, 00:09:18.677 "nvme_iov_md": false 00:09:18.677 }, 00:09:18.677 "driver_specific": { 00:09:18.677 "lvol": { 00:09:18.677 "lvol_store_uuid": "014cf55a-4d0d-43c7-ad56-4f8b0988c422", 00:09:18.677 "base_bdev": "aio_bdev", 00:09:18.677 "thin_provision": false, 00:09:18.677 "num_allocated_clusters": 38, 00:09:18.677 "snapshot": false, 00:09:18.677 "clone": false, 00:09:18.677 "esnap_clone": false 00:09:18.677 } 00:09:18.677 } 00:09:18.677 } 00:09:18.677 ] 00:09:18.677 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:18.677 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 014cf55a-4d0d-43c7-ad56-4f8b0988c422 00:09:18.677 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:18.935 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:18.935 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 014cf55a-4d0d-43c7-ad56-4f8b0988c422 00:09:18.935 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:19.193 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:19.193 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7883aca5-6e66-461f-b626-a0318018431f 00:09:19.452 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 014cf55a-4d0d-43c7-ad56-4f8b0988c422 00:09:19.710 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:19.969 00:09:19.969 real 0m17.113s 00:09:19.969 user 0m16.565s 00:09:19.969 sys 0m1.944s 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:19.969 ************************************ 00:09:19.969 END TEST lvs_grow_clean 00:09:19.969 ************************************ 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:19.969 ************************************ 00:09:19.969 START TEST lvs_grow_dirty 00:09:19.969 ************************************ 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:19.969 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:20.227 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:20.227 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:20.485 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=6667bcc8-ca10-45cc-abc6-0aec85ad7ca0 00:09:20.485 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6667bcc8-ca10-45cc-abc6-0aec85ad7ca0 00:09:20.485 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:20.743 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:20.744 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:20.744 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6667bcc8-ca10-45cc-abc6-0aec85ad7ca0 lvol 150 00:09:21.002 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=90219a45-7b41-4b23-bbcc-88081f12c414 00:09:21.002 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:21.002 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:21.260 [2024-07-26 14:03:29.080667] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:21.260 [2024-07-26 14:03:29.080763] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:21.260 true 00:09:21.260 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6667bcc8-ca10-45cc-abc6-0aec85ad7ca0 00:09:21.260 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:21.518 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:21.518 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:21.775 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 90219a45-7b41-4b23-bbcc-88081f12c414 00:09:22.033 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:22.291 [2024-07-26 14:03:30.063677] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.291 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
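The target-side setup has just been repeated for lvs_grow_dirty with a fresh lvstore (6667bcc8-...) and lvol (90219a45-...). What follows is the client side: a bdevperf process attaches to the exported namespace over TCP and drives random 4 KiB writes while the lvstore is grown underneath it. Condensed, with paths shortened and <lvs-uuid> a stand-in for the UUID above:

    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
           -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>   # total_data_clusters: 49 -> 99, while I/O runs

The grow is verified mid-run by re-reading total_data_clusters with bdev_lvol_get_lvstores, and the per-second bdevperf tables keep climbing through the resize.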
00:09:22.550 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=139440 00:09:22.550 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:22.550 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:22.550 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 139440 /var/tmp/bdevperf.sock 00:09:22.550 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 139440 ']' 00:09:22.550 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:22.550 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.550 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:22.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:22.550 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.550 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:22.550 [2024-07-26 14:03:30.357469] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:09:22.550 [2024-07-26 14:03:30.357570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139440 ] 00:09:22.550 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.550 [2024-07-26 14:03:30.414679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.550 [2024-07-26 14:03:30.528231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.808 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.808 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:22.808 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:23.066 Nvme0n1 00:09:23.066 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:23.324 [ 00:09:23.324 { 00:09:23.324 "name": "Nvme0n1", 00:09:23.324 "aliases": [ 00:09:23.324 "90219a45-7b41-4b23-bbcc-88081f12c414" 00:09:23.324 ], 00:09:23.324 "product_name": "NVMe disk", 00:09:23.324 "block_size": 4096, 00:09:23.324 "num_blocks": 38912, 00:09:23.324 "uuid": "90219a45-7b41-4b23-bbcc-88081f12c414", 00:09:23.324 "assigned_rate_limits": { 00:09:23.324 "rw_ios_per_sec": 0, 00:09:23.324 "rw_mbytes_per_sec": 0, 00:09:23.324 "r_mbytes_per_sec": 0, 00:09:23.324 "w_mbytes_per_sec": 0 00:09:23.324 }, 00:09:23.324 "claimed": false, 00:09:23.324 "zoned": false, 00:09:23.324 "supported_io_types": { 00:09:23.324 "read": true, 00:09:23.324 "write": true, 00:09:23.324 "unmap": true, 00:09:23.324 "flush": true, 00:09:23.324 "reset": true, 00:09:23.324 "nvme_admin": true, 00:09:23.324 "nvme_io": true, 00:09:23.324 "nvme_io_md": false, 00:09:23.324 "write_zeroes": true, 00:09:23.324 "zcopy": false, 00:09:23.324 "get_zone_info": false, 00:09:23.324 "zone_management": false, 00:09:23.324 "zone_append": false, 00:09:23.324 "compare": true, 00:09:23.324 "compare_and_write": true, 00:09:23.324 "abort": true, 00:09:23.324 "seek_hole": false, 00:09:23.324 "seek_data": false, 00:09:23.324 "copy": true, 00:09:23.324 "nvme_iov_md": false 00:09:23.324 }, 00:09:23.324 "memory_domains": [ 00:09:23.324 { 00:09:23.324 "dma_device_id": "system", 00:09:23.324 "dma_device_type": 1 00:09:23.324 } 00:09:23.324 ], 00:09:23.324 "driver_specific": { 00:09:23.324 "nvme": [ 00:09:23.324 { 00:09:23.324 "trid": { 00:09:23.324 "trtype": "TCP", 00:09:23.324 "adrfam": "IPv4", 00:09:23.324 "traddr": "10.0.0.2", 00:09:23.324 "trsvcid": "4420", 00:09:23.324 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:23.324 }, 00:09:23.324 "ctrlr_data": { 00:09:23.324 "cntlid": 1, 00:09:23.324 "vendor_id": "0x8086", 00:09:23.324 "model_number": "SPDK bdev Controller", 00:09:23.324 "serial_number": "SPDK0", 00:09:23.324 "firmware_revision": "24.09", 00:09:23.324 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:23.324 "oacs": { 00:09:23.324 "security": 0, 00:09:23.324 "format": 0, 00:09:23.324 "firmware": 0, 00:09:23.324 "ns_manage": 0 00:09:23.324 }, 00:09:23.325 
"multi_ctrlr": true, 00:09:23.325 "ana_reporting": false 00:09:23.325 }, 00:09:23.325 "vs": { 00:09:23.325 "nvme_version": "1.3" 00:09:23.325 }, 00:09:23.325 "ns_data": { 00:09:23.325 "id": 1, 00:09:23.325 "can_share": true 00:09:23.325 } 00:09:23.325 } 00:09:23.325 ], 00:09:23.325 "mp_policy": "active_passive" 00:09:23.325 } 00:09:23.325 } 00:09:23.325 ] 00:09:23.325 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=139574 00:09:23.325 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:23.325 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:23.583 Running I/O for 10 seconds... 00:09:24.518 Latency(us) 00:09:24.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.518 Nvme0n1 : 1.00 15749.00 61.52 0.00 0.00 0.00 0.00 0.00 00:09:24.518 =================================================================================================================== 00:09:24.518 Total : 15749.00 61.52 0.00 0.00 0.00 0.00 0.00 00:09:24.518 00:09:25.450 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6667bcc8-ca10-45cc-abc6-0aec85ad7ca0 00:09:25.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.450 Nvme0n1 : 2.00 15908.00 62.14 0.00 0.00 0.00 0.00 0.00 00:09:25.450 =================================================================================================================== 00:09:25.450 Total : 15908.00 62.14 0.00 0.00 0.00 0.00 0.00 00:09:25.450 00:09:25.708 true 00:09:25.708 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6667bcc8-ca10-45cc-abc6-0aec85ad7ca0 00:09:25.708 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:25.967 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:25.967 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:25.967 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 139574 00:09:26.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.534 Nvme0n1 : 3.00 15981.67 62.43 0.00 0.00 0.00 0.00 0.00 00:09:26.534 =================================================================================================================== 00:09:26.534 Total : 15981.67 62.43 0.00 0.00 0.00 0.00 0.00 00:09:26.534 00:09:27.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.468 Nvme0n1 : 4.00 16082.00 62.82 0.00 0.00 0.00 0.00 0.00 00:09:27.468 =================================================================================================================== 00:09:27.468 Total : 16082.00 62.82 0.00 0.00 0.00 0.00 0.00 00:09:27.468 00:09:28.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:09:28.403 Nvme0n1 : 5.00 16167.60 63.15 0.00 0.00 0.00 0.00 0.00 00:09:28.403 =================================================================================================================== 00:09:28.403 Total : 16167.60 63.15 0.00 0.00 0.00 0.00 0.00 00:09:28.403 00:09:29.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.777 Nvme0n1 : 6.00 16224.67 63.38 0.00 0.00 0.00 0.00 0.00 00:09:29.777 =================================================================================================================== 00:09:29.778 Total : 16224.67 63.38 0.00 0.00 0.00 0.00 0.00 00:09:29.778 00:09:30.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.713 Nvme0n1 : 7.00 16265.43 63.54 0.00 0.00 0.00 0.00 0.00 00:09:30.713 =================================================================================================================== 00:09:30.713 Total : 16265.43 63.54 0.00 0.00 0.00 0.00 0.00 00:09:30.713 00:09:31.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.650 Nvme0n1 : 8.00 16327.75 63.78 0.00 0.00 0.00 0.00 0.00 00:09:31.650 =================================================================================================================== 00:09:31.650 Total : 16327.75 63.78 0.00 0.00 0.00 0.00 0.00 00:09:31.650 00:09:32.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.585 Nvme0n1 : 9.00 16345.33 63.85 0.00 0.00 0.00 0.00 0.00 00:09:32.585 =================================================================================================================== 00:09:32.585 Total : 16345.33 63.85 0.00 0.00 0.00 0.00 0.00 00:09:32.585 00:09:33.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.520 Nvme0n1 : 10.00 16374.50 63.96 0.00 0.00 0.00 0.00 0.00 00:09:33.520 =================================================================================================================== 00:09:33.520 Total : 16374.50 63.96 0.00 0.00 0.00 0.00 0.00 00:09:33.520 00:09:33.520 00:09:33.520 Latency(us) 00:09:33.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.520 Nvme0n1 : 10.01 16376.15 63.97 0.00 0.00 7811.69 3373.89 15534.46 00:09:33.520 =================================================================================================================== 00:09:33.520 Total : 16376.15 63.97 0.00 0.00 7811.69 3373.89 15534.46 00:09:33.520 0 00:09:33.520 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 139440 00:09:33.520 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 139440 ']' 00:09:33.520 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 139440 00:09:33.520 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:33.520 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:33.520 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 139440 00:09:33.520 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:33.520 14:03:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:33.520 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 139440' 00:09:33.520 killing process with pid 139440 00:09:33.520 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 139440 00:09:33.520 Received shutdown signal, test time was about 10.000000 seconds 00:09:33.520 00:09:33.520 Latency(us) 00:09:33.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.520 =================================================================================================================== 00:09:33.520 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:33.520 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 139440 00:09:33.778 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:34.036 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:34.292 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6667bcc8-ca10-45cc-abc6-0aec85ad7ca0 00:09:34.292 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 136932 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 136932 00:09:34.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 136932 Killed "${NVMF_APP[@]}" "$@" 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=140908 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # 
waitforlisten 140908 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 140908 ']' 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.549 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:34.549 [2024-07-26 14:03:42.537923] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:09:34.549 [2024-07-26 14:03:42.538021] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.807 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.807 [2024-07-26 14:03:42.604471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.807 [2024-07-26 14:03:42.712479] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.807 [2024-07-26 14:03:42.712555] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.807 [2024-07-26 14:03:42.712571] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.807 [2024-07-26 14:03:42.712582] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.807 [2024-07-26 14:03:42.712591] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
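This is the dirty half of the test: the original nvmf target (pid 136932) was killed with SIGKILL while the grown lvstore was still loaded, so nothing was unloaded cleanly. A new target (pid 140908) has just been started, and re-creating the AIO bdev below makes blobstore load the lvstore and run recovery ("Performing recovery on blobstore" in the trace that follows). Roughly, under the same shortened-path convention (the ip-netns wrapper from the trace is omitted):

    kill -9 "$nvmfpid"                                  # old target dies with the lvstore dirty
    nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &                    # fresh target process
    rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096  # reload triggers blobstore recovery
    rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000        # lvol reappears intact after recovery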
00:09:34.807 [2024-07-26 14:03:42.712630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.807 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.807 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:34.807 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:34.807 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:34.807 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:35.065 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.065 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:35.065 [2024-07-26 14:03:43.069002] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:35.065 [2024-07-26 14:03:43.069145] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:35.065 [2024-07-26 14:03:43.069192] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:35.331 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:35.331 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 90219a45-7b41-4b23-bbcc-88081f12c414 00:09:35.331 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=90219a45-7b41-4b23-bbcc-88081f12c414 00:09:35.331 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.331 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:35.331 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.331 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.331 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:35.331 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 90219a45-7b41-4b23-bbcc-88081f12c414 -t 2000 00:09:35.592 [ 00:09:35.592 { 00:09:35.592 "name": "90219a45-7b41-4b23-bbcc-88081f12c414", 00:09:35.592 "aliases": [ 00:09:35.592 "lvs/lvol" 00:09:35.592 ], 00:09:35.592 "product_name": "Logical Volume", 00:09:35.592 "block_size": 4096, 00:09:35.592 "num_blocks": 38912, 00:09:35.592 "uuid": "90219a45-7b41-4b23-bbcc-88081f12c414", 00:09:35.592 "assigned_rate_limits": { 00:09:35.592 "rw_ios_per_sec": 0, 00:09:35.592 "rw_mbytes_per_sec": 0, 00:09:35.592 "r_mbytes_per_sec": 0, 00:09:35.592 "w_mbytes_per_sec": 0 00:09:35.592 }, 00:09:35.592 "claimed": false, 00:09:35.592 "zoned": false, 
00:09:35.592 "supported_io_types": { 00:09:35.592 "read": true, 00:09:35.592 "write": true, 00:09:35.592 "unmap": true, 00:09:35.592 "flush": false, 00:09:35.592 "reset": true, 00:09:35.592 "nvme_admin": false, 00:09:35.592 "nvme_io": false, 00:09:35.592 "nvme_io_md": false, 00:09:35.592 "write_zeroes": true, 00:09:35.592 "zcopy": false, 00:09:35.592 "get_zone_info": false, 00:09:35.592 "zone_management": false, 00:09:35.592 "zone_append": false, 00:09:35.592 "compare": false, 00:09:35.592 "compare_and_write": false, 00:09:35.592 "abort": false, 00:09:35.592 "seek_hole": true, 00:09:35.592 "seek_data": true, 00:09:35.592 "copy": false, 00:09:35.592 "nvme_iov_md": false 00:09:35.592 }, 00:09:35.592 "driver_specific": { 00:09:35.592 "lvol": { 00:09:35.592 "lvol_store_uuid": "6667bcc8-ca10-45cc-abc6-0aec85ad7ca0", 00:09:35.592 "base_bdev": "aio_bdev", 00:09:35.592 "thin_provision": false, 00:09:35.592 "num_allocated_clusters": 38, 00:09:35.592 "snapshot": false, 00:09:35.592 "clone": false, 00:09:35.592 "esnap_clone": false 00:09:35.592 } 00:09:35.592 } 00:09:35.592 } 00:09:35.592 ] 00:09:35.592 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:35.592 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6667bcc8-ca10-45cc-abc6-0aec85ad7ca0 00:09:35.592 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:35.850 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:35.850 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6667bcc8-ca10-45cc-abc6-0aec85ad7ca0 00:09:35.850 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:36.108 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:36.108 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:36.366 [2024-07-26 14:03:44.302128] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:36.366 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6667bcc8-ca10-45cc-abc6-0aec85ad7ca0 00:09:36.366 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:36.366 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6667bcc8-ca10-45cc-abc6-0aec85ad7ca0 00:09:36.366 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.366 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:09:36.366 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.366 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.366 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.366 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.366 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.366 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:36.366 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6667bcc8-ca10-45cc-abc6-0aec85ad7ca0 00:09:36.623 request: 00:09:36.623 { 00:09:36.623 "uuid": "6667bcc8-ca10-45cc-abc6-0aec85ad7ca0", 00:09:36.623 "method": "bdev_lvol_get_lvstores", 00:09:36.623 "req_id": 1 00:09:36.623 } 00:09:36.623 Got JSON-RPC error response 00:09:36.623 response: 00:09:36.623 { 00:09:36.623 "code": -19, 00:09:36.623 "message": "No such device" 00:09:36.623 } 00:09:36.623 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:36.623 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:36.623 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:36.623 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:36.623 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:36.881 aio_bdev 00:09:36.881 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 90219a45-7b41-4b23-bbcc-88081f12c414 00:09:36.881 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=90219a45-7b41-4b23-bbcc-88081f12c414 00:09:36.881 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.881 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:36.881 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.881 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.881 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:37.138 14:03:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 90219a45-7b41-4b23-bbcc-88081f12c414 -t 2000 00:09:37.396 [ 00:09:37.396 { 00:09:37.396 "name": "90219a45-7b41-4b23-bbcc-88081f12c414", 00:09:37.396 "aliases": [ 00:09:37.396 "lvs/lvol" 00:09:37.396 ], 00:09:37.396 "product_name": "Logical Volume", 00:09:37.396 "block_size": 4096, 00:09:37.396 "num_blocks": 38912, 00:09:37.396 "uuid": "90219a45-7b41-4b23-bbcc-88081f12c414", 00:09:37.396 "assigned_rate_limits": { 00:09:37.396 "rw_ios_per_sec": 0, 00:09:37.396 "rw_mbytes_per_sec": 0, 00:09:37.396 "r_mbytes_per_sec": 0, 00:09:37.396 "w_mbytes_per_sec": 0 00:09:37.396 }, 00:09:37.396 "claimed": false, 00:09:37.396 "zoned": false, 00:09:37.396 "supported_io_types": { 00:09:37.396 "read": true, 00:09:37.396 "write": true, 00:09:37.396 "unmap": true, 00:09:37.396 "flush": false, 00:09:37.396 "reset": true, 00:09:37.396 "nvme_admin": false, 00:09:37.396 "nvme_io": false, 00:09:37.396 "nvme_io_md": false, 00:09:37.396 "write_zeroes": true, 00:09:37.396 "zcopy": false, 00:09:37.396 "get_zone_info": false, 00:09:37.397 "zone_management": false, 00:09:37.397 "zone_append": false, 00:09:37.397 "compare": false, 00:09:37.397 "compare_and_write": false, 00:09:37.397 "abort": false, 00:09:37.397 "seek_hole": true, 00:09:37.397 "seek_data": true, 00:09:37.397 "copy": false, 00:09:37.397 "nvme_iov_md": false 00:09:37.397 }, 00:09:37.397 "driver_specific": { 00:09:37.397 "lvol": { 00:09:37.397 "lvol_store_uuid": "6667bcc8-ca10-45cc-abc6-0aec85ad7ca0", 00:09:37.397 "base_bdev": "aio_bdev", 00:09:37.397 "thin_provision": false, 00:09:37.397 "num_allocated_clusters": 38, 00:09:37.397 "snapshot": false, 00:09:37.397 "clone": false, 00:09:37.397 "esnap_clone": false 00:09:37.397 } 00:09:37.397 } 00:09:37.397 } 00:09:37.397 ] 00:09:37.397 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:37.397 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6667bcc8-ca10-45cc-abc6-0aec85ad7ca0 00:09:37.397 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:37.655 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:37.655 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6667bcc8-ca10-45cc-abc6-0aec85ad7ca0 00:09:37.655 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:37.911 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:37.911 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 90219a45-7b41-4b23-bbcc-88081f12c414 00:09:38.167 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6667bcc8-ca10-45cc-abc6-0aec85ad7ca0 
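Condensed, the dirty-recovery check running above is a short RPC sequence; a minimal sketch of the same steps, using this run's UUIDs and expected cluster counts, with the long /var/jenkins/... rpc.py path shortened:

  rpc=scripts/rpc.py
  lvs_uuid=6667bcc8-ca10-45cc-abc6-0aec85ad7ca0
  lvol_uuid=90219a45-7b41-4b23-bbcc-88081f12c414
  $rpc bdev_aio_delete aio_bdev                                  # hot-remove the base bdev; the lvstore closes
  ! $rpc bdev_lvol_get_lvstores -u $lvs_uuid                     # lookup must now fail with -19 (No such device)
  $rpc bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # recreate it; the dirty lvstore is re-examined
  $rpc bdev_get_bdevs -b $lvol_uuid -t 2000                      # wait (up to 2000 ms) for the lvol bdev to reappear
  (( $($rpc bdev_lvol_get_lvstores -u $lvs_uuid | jq -r '.[0].free_clusters') == 61 ))
  (( $($rpc bdev_lvol_get_lvstores -u $lvs_uuid | jq -r '.[0].total_data_clusters') == 99 ))
  $rpc bdev_lvol_delete $lvol_uuid                               # then tear down the volume and the store
  $rpc bdev_lvol_delete_lvstore -u $lvs_uuid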
00:09:38.424 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:38.681 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:38.681 00:09:38.681 real 0m18.815s 00:09:38.681 user 0m47.842s 00:09:38.681 sys 0m4.454s 00:09:38.681 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.681 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:38.681 ************************************ 00:09:38.681 END TEST lvs_grow_dirty 00:09:38.681 ************************************ 00:09:38.681 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:38.681 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:38.681 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:38.681 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:38.681 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:38.681 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:38.681 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:38.682 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:38.682 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:38.682 nvmf_trace.0 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:38.939 rmmod nvme_tcp 00:09:38.939 rmmod nvme_fabrics 00:09:38.939 rmmod nvme_keyring 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 140908 ']' 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 140908 00:09:38.939 
14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 140908 ']' 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 140908 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 140908 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 140908' 00:09:38.939 killing process with pid 140908 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 140908 00:09:38.939 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 140908 00:09:39.199 14:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:39.199 14:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:39.199 14:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:39.199 14:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:39.199 14:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:39.199 14:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.199 14:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.199 14:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.109 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:41.109 00:09:41.109 real 0m41.296s 00:09:41.109 user 1m10.000s 00:09:41.109 sys 0m8.326s 00:09:41.109 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.109 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:41.109 ************************************ 00:09:41.109 END TEST nvmf_lvs_grow 00:09:41.109 ************************************ 00:09:41.109 14:03:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:41.109 14:03:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:41.109 14:03:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.109 14:03:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:41.368 ************************************ 00:09:41.368 START TEST nvmf_bdev_io_wait 00:09:41.368 ************************************ 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:41.368 * Looking for test storage... 00:09:41.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:41.368 
14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:41.368 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:43.273 14:03:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.273 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:43.274 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:43.274 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:43.274 Found net devices under 0000:09:00.0: cvl_0_0 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:43.274 Found net devices under 0000:09:00.1: cvl_0_1 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:43.274 14:03:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.274 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.532 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.532 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.532 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:43.532 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.532 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.532 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.532 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:43.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:09:43.533 00:09:43.533 --- 10.0.0.2 ping statistics --- 00:09:43.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.533 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:43.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:09:43.533 00:09:43.533 --- 10.0.0.1 ping statistics --- 00:09:43.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.533 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=143437 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 143437 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 143437 ']' 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.533 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.533 [2024-07-26 14:03:51.457965] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:09:43.533 [2024-07-26 14:03:51.458046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.533 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.533 [2024-07-26 14:03:51.517299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.792 [2024-07-26 14:03:51.620502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.792 [2024-07-26 14:03:51.620574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.792 [2024-07-26 14:03:51.620603] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.792 [2024-07-26 14:03:51.620615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.792 [2024-07-26 14:03:51.620624] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.792 [2024-07-26 14:03:51.620707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.792 [2024-07-26 14:03:51.620768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.792 [2024-07-26 14:03:51.620790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.792 [2024-07-26 14:03:51.620793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.792 14:03:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.792 [2024-07-26 14:03:51.764922] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.792 Malloc0 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.792 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.051 [2024-07-26 14:03:51.825855] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=143463 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=143465 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@32 -- # FLUSH_PID=143467 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:44.051 { 00:09:44.051 "params": { 00:09:44.051 "name": "Nvme$subsystem", 00:09:44.051 "trtype": "$TEST_TRANSPORT", 00:09:44.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.051 "adrfam": "ipv4", 00:09:44.051 "trsvcid": "$NVMF_PORT", 00:09:44.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.051 "hdgst": ${hdgst:-false}, 00:09:44.051 "ddgst": ${ddgst:-false} 00:09:44.051 }, 00:09:44.051 "method": "bdev_nvme_attach_controller" 00:09:44.051 } 00:09:44.051 EOF 00:09:44.051 )") 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=143469 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:44.051 { 00:09:44.051 "params": { 00:09:44.051 "name": "Nvme$subsystem", 00:09:44.051 "trtype": "$TEST_TRANSPORT", 00:09:44.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.051 "adrfam": "ipv4", 00:09:44.051 "trsvcid": "$NVMF_PORT", 00:09:44.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.051 "hdgst": ${hdgst:-false}, 00:09:44.051 "ddgst": ${ddgst:-false} 00:09:44.051 }, 00:09:44.051 "method": "bdev_nvme_attach_controller" 00:09:44.051 } 00:09:44.051 EOF 00:09:44.051 )") 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:44.051 { 00:09:44.051 "params": { 00:09:44.051 "name": "Nvme$subsystem", 00:09:44.051 "trtype": "$TEST_TRANSPORT", 00:09:44.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.051 "adrfam": "ipv4", 00:09:44.051 "trsvcid": "$NVMF_PORT", 00:09:44.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.051 "hdgst": ${hdgst:-false}, 00:09:44.051 "ddgst": ${ddgst:-false} 00:09:44.051 }, 00:09:44.051 "method": "bdev_nvme_attach_controller" 00:09:44.051 } 00:09:44.051 EOF 00:09:44.051 )") 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:44.051 { 00:09:44.051 "params": { 00:09:44.051 "name": "Nvme$subsystem", 00:09:44.051 "trtype": "$TEST_TRANSPORT", 00:09:44.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.051 "adrfam": "ipv4", 00:09:44.051 "trsvcid": "$NVMF_PORT", 00:09:44.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.051 "hdgst": ${hdgst:-false}, 00:09:44.051 "ddgst": ${ddgst:-false} 00:09:44.051 }, 00:09:44.051 "method": "bdev_nvme_attach_controller" 00:09:44.051 } 00:09:44.051 EOF 00:09:44.051 )") 00:09:44.051 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:44.052 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:44.052 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 143463 00:09:44.052 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:44.052 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:44.052 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:44.052 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:44.052 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
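Each bdevperf job above reads its generated attach config from /dev/fd/63. A minimal sketch of an equivalent standalone invocation for the write job, with the attach stanza printed just below substituted in; the "subsystems" wrapper is the standard SPDK JSON-config shape and is assumed here, since gen_nvmf_target_json's full output is not shown verbatim in this log:

  cfg='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false } } ] } ] }'
  # process substitution supplies a /dev/fd/NN path, standing in for the fd 63 redirection
  ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(printf '%s\n' "$cfg")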
00:09:44.052 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:44.052 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:44.052 "params": { 00:09:44.052 "name": "Nvme1", 00:09:44.052 "trtype": "tcp", 00:09:44.052 "traddr": "10.0.0.2", 00:09:44.052 "adrfam": "ipv4", 00:09:44.052 "trsvcid": "4420", 00:09:44.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.052 "hdgst": false, 00:09:44.052 "ddgst": false 00:09:44.052 }, 00:09:44.052 "method": "bdev_nvme_attach_controller" 00:09:44.052 }' 00:09:44.052 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:44.052 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:44.052 "params": { 00:09:44.052 "name": "Nvme1", 00:09:44.052 "trtype": "tcp", 00:09:44.052 "traddr": "10.0.0.2", 00:09:44.052 "adrfam": "ipv4", 00:09:44.052 "trsvcid": "4420", 00:09:44.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.052 "hdgst": false, 00:09:44.052 "ddgst": false 00:09:44.052 }, 00:09:44.052 "method": "bdev_nvme_attach_controller" 00:09:44.052 }' 00:09:44.052 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:44.052 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:44.052 "params": { 00:09:44.052 "name": "Nvme1", 00:09:44.052 "trtype": "tcp", 00:09:44.052 "traddr": "10.0.0.2", 00:09:44.052 "adrfam": "ipv4", 00:09:44.052 "trsvcid": "4420", 00:09:44.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.052 "hdgst": false, 00:09:44.052 "ddgst": false 00:09:44.052 }, 00:09:44.052 "method": "bdev_nvme_attach_controller" 00:09:44.052 }' 00:09:44.052 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:44.052 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:44.052 "params": { 00:09:44.052 "name": "Nvme1", 00:09:44.052 "trtype": "tcp", 00:09:44.052 "traddr": "10.0.0.2", 00:09:44.052 "adrfam": "ipv4", 00:09:44.052 "trsvcid": "4420", 00:09:44.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.052 "hdgst": false, 00:09:44.052 "ddgst": false 00:09:44.052 }, 00:09:44.052 "method": "bdev_nvme_attach_controller" 00:09:44.052 }' 00:09:44.052 [2024-07-26 14:03:51.874273] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:09:44.052 [2024-07-26 14:03:51.874273] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:09:44.052 [2024-07-26 14:03:51.874273] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:09:44.052 [2024-07-26 14:03:51.874280] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:09:44.052 [2024-07-26 14:03:51.874362] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:09:44.052 [2024-07-26 14:03:51.874362] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:09:44.052 [2024-07-26 14:03:51.874362] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:09:44.052 [2024-07-26 14:03:51.874367] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:09:44.052 EAL: No free 2048 kB hugepages reported on node 1
00:09:44.052 EAL: No free 2048 kB hugepages reported on node 1
[2024-07-26 14:03:52.051581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:44.310 EAL: No free 2048 kB hugepages reported on node 1
00:09:44.310 [2024-07-26 14:03:52.154604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:09:44.310 [2024-07-26 14:03:52.160498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:44.310 EAL: No free 2048 kB hugepages reported on node 1
00:09:44.310 [2024-07-26 14:03:52.261436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:09:44.310 [2024-07-26 14:03:52.265371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:44.568 [2024-07-26 14:03:52.334631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:44.568 [2024-07-26 14:03:52.367388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:09:44.568 [2024-07-26 14:03:52.434207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:09:44.827 Running I/O for 1 seconds... 00:09:44.827 Running I/O for 1 seconds... 00:09:44.827 Running I/O for 1 seconds... 00:09:44.827 Running I/O for 1 seconds... 
00:09:45.770 00:09:45.770 Latency(us) 00:09:45.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.770 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:45.770 Nvme1n1 : 1.00 195151.40 762.31 0.00 0.00 653.34 263.96 873.81 00:09:45.770 =================================================================================================================== 00:09:45.770 Total : 195151.40 762.31 0.00 0.00 653.34 263.96 873.81 00:09:45.770 00:09:45.770 Latency(us) 00:09:45.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.770 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:45.770 Nvme1n1 : 1.02 6233.41 24.35 0.00 0.00 20263.71 9903.22 29127.11 00:09:45.770 =================================================================================================================== 00:09:45.770 Total : 6233.41 24.35 0.00 0.00 20263.71 9903.22 29127.11 00:09:45.770 00:09:45.770 Latency(us) 00:09:45.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.770 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:45.770 Nvme1n1 : 1.01 8644.22 33.77 0.00 0.00 14727.09 10000.31 25437.68 00:09:45.770 =================================================================================================================== 00:09:45.770 Total : 8644.22 33.77 0.00 0.00 14727.09 10000.31 25437.68 00:09:45.770 00:09:45.770 Latency(us) 00:09:45.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.770 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:45.770 Nvme1n1 : 1.00 6325.66 24.71 0.00 0.00 20175.54 5000.15 45049.93 00:09:45.770 =================================================================================================================== 00:09:45.770 Total : 6325.66 24.71 0.00 0.00 20175.54 5000.15 45049.93 00:09:46.029 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 143465 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 143467 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 143469 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.287 rmmod nvme_tcp 00:09:46.287 rmmod nvme_fabrics 00:09:46.287 rmmod nvme_keyring 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 143437 ']' 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 143437 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 143437 ']' 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 143437 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 143437 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 143437' 00:09:46.287 killing process with pid 143437 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 143437 00:09:46.287 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 143437 00:09:46.546 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:46.546 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:46.546 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:46.546 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.546 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.546 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.546 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.546 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.459 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:48.459 00:09:48.459 real 0m7.338s 00:09:48.459 user 0m17.644s 00:09:48.459 sys 0m3.470s 00:09:48.459 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.459 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:48.459 ************************************ 00:09:48.459 END TEST nvmf_bdev_io_wait 
00:09:48.459 ************************************ 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:48.719 ************************************ 00:09:48.719 START TEST nvmf_queue_depth 00:09:48.719 ************************************ 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:48.719 * Looking for test storage... 00:09:48.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:48.719 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@296 -- # e810=() 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:51.258 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:51.258 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:51.258 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:51.259 Found net devices under 0000:09:00.0: cvl_0_0 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:51.259 Found net devices under 0000:09:00.1: cvl_0_1 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:51.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:51.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms
00:09:51.259
00:09:51.259 --- 10.0.0.2 ping statistics ---
00:09:51.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:51.259 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:51.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:51.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms
00:09:51.259
00:09:51.259 --- 10.0.0.1 ping statistics ---
00:09:51.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:51.259 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=145710
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 145710
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 145710 ']'
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
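Once waitforlisten sees the RPC socket, the target is configured over RPC. The sequence traced below, collected into one hedged sketch (all values exactly as logged; rpc.py stands in for the script's rpc_cmd wrapper over scripts/rpc.py):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                    # create the TCP transport, flags as traced
  rpc.py bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB malloc bdev with 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # any host may connect
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose the bdev as a namespace
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on 10.0.0.2:4420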
00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.259 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.259 [2024-07-26 14:03:58.925329] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:09:51.259 [2024-07-26 14:03:58.925415] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.259 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.259 [2024-07-26 14:03:58.989658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.259 [2024-07-26 14:03:59.101947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.259 [2024-07-26 14:03:59.102002] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.259 [2024-07-26 14:03:59.102030] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.259 [2024-07-26 14:03:59.102042] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.259 [2024-07-26 14:03:59.102052] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.259 [2024-07-26 14:03:59.102082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.259 [2024-07-26 14:03:59.235097] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.259 Malloc0 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:51.259 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.518 [2024-07-26 14:03:59.294063] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=145838 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 145838 /var/tmp/bdevperf.sock 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 145838 ']' 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:51.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.518 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.518 [2024-07-26 14:03:59.336355] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:09:51.519 [2024-07-26 14:03:59.336429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145838 ]
00:09:51.519 EAL: No free 2048 kB hugepages reported on node 1
00:09:51.519 [2024-07-26 14:03:59.393068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:51.519 [2024-07-26 14:03:59.497918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:51.777 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:51.777 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0
00:09:51.777 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:09:51.777 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.777 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:09:51.777 NVMe0n1
00:09:51.777 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.777 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:09:51.777 Running I/O for 10 seconds...
00:10:03.982
00:10:03.982 Latency(us)
00:10:03.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:03.982 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:10:03.982 Verification LBA range: start 0x0 length 0x4000
00:10:03.982 NVMe0n1 : 10.09 8998.78 35.15 0.00 0.00 113286.47 22136.60 72235.24
00:10:03.982 ===================================================================================================================
00:10:03.982 Total : 8998.78 35.15 0.00 0.00 113286.47 22136.60 72235.24
00:10:03.982 0
00:10:03.982 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 145838
00:10:03.982 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 145838 ']'
00:10:03.982 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 145838
00:10:03.982 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:10:03.982 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:03.982 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 145838
00:10:03.982 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:03.982 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:03.982 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 145838'
00:10:03.982 killing process with pid 145838
00:10:03.982 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 145838
00:10:03.982 Received shutdown signal, test time was about 10.000000 seconds
00:10:03.982
00:10:03.982 Latency(us)
00:10:03.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:03.982 ===================================================================================================================
00:10:03.982 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:10:03.982 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 145838
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:03.982 rmmod nvme_tcp
00:10:03.982 rmmod nvme_fabrics
00:10:03.982 rmmod nvme_keyring
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 145710 ']'
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 145710
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 145710 ']'
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 145710
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:03.982 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 145710
00:10:03.983 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:10:03.983 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:10:03.983 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 145710'
00:10:03.983 killing process with pid 145710
00:10:03.983 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 145710
00:10:03.983 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 145710
00:10:03.983 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:03.983 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:03.983 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496
-- # nvmf_tcp_fini 00:10:03.983 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:03.983 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:03.983 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.983 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.983 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:04.924 00:10:04.924 real 0m16.054s 00:10:04.924 user 0m22.499s 00:10:04.924 sys 0m3.081s 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.924 ************************************ 00:10:04.924 END TEST nvmf_queue_depth 00:10:04.924 ************************************ 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.924 ************************************ 00:10:04.924 START TEST nvmf_target_multipath 00:10:04.924 ************************************ 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:04.924 * Looking for test storage... 
00:10:04.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:04.924 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:06.827 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.827 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:06.828 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:06.828 Found net devices under 0000:09:00.0: cvl_0_0 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.828 14:04:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:06.828 Found net devices under 0000:09:00.1: cvl_0_1 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.828 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:07.087 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:10:07.087 00:10:07.087 --- 10.0.0.2 ping statistics --- 00:10:07.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.087 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:07.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:10:07.087 00:10:07.087 --- 10.0.0.1 ping statistics --- 00:10:07.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.087 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:07.087 only one NIC for nvmf test 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:07.087 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:07.087 rmmod nvme_tcp 00:10:07.087 rmmod nvme_fabrics 00:10:07.087 rmmod nvme_keyring 00:10:07.087 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:07.087 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:07.087 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:07.087 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:07.087 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:07.087 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:07.087 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:07.087 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:07.087 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:07.087 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.087 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.087 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:09.626 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:09.627 00:10:09.627 real 0m4.491s 
00:10:09.627 user 0m0.859s 00:10:09.627 sys 0m1.613s 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:09.627 ************************************ 00:10:09.627 END TEST nvmf_target_multipath 00:10:09.627 ************************************ 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.627 ************************************ 00:10:09.627 START TEST nvmf_zcopy 00:10:09.627 ************************************ 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:09.627 * Looking for test storage... 00:10:09.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.627 14:04:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:09.627 14:04:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:10:09.627 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:11.529 14:04:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:11.529 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:11.529 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:11.529 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:11.530 Found net devices under 0000:09:00.0: cvl_0_0 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:11.530 Found net devices under 0000:09:00.1: cvl_0_1 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.530 14:04:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:11.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:10:11.530 00:10:11.530 --- 10.0.0.2 ping statistics --- 00:10:11.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.530 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:11.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:10:11.530 00:10:11.530 --- 10.0.0.1 ping statistics --- 00:10:11.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.530 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=150927 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 150927 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 150927 ']' 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.530 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.530 [2024-07-26 14:04:19.458156] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
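
Note: the nvmf_tcp_init sequence above gives the target its own network stack: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and nvmf_tgt is launched inside it, so traffic from cvl_0_1 in the root namespace reaches 10.0.0.2:4420 over the real link rather than loopback; the two pings verify both directions before the target starts. The plumbing, collected in one place (interface names and addresses are the ones from this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
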
00:10:11.530 [2024-07-26 14:04:19.458234] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.530 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.530 [2024-07-26 14:04:19.523479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.789 [2024-07-26 14:04:19.625031] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.789 [2024-07-26 14:04:19.625085] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.789 [2024-07-26 14:04:19.625114] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.789 [2024-07-26 14:04:19.625125] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.789 [2024-07-26 14:04:19.625135] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.789 [2024-07-26 14:04:19.625161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.789 [2024-07-26 14:04:19.768098] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.789 [2024-07-26 14:04:19.784298] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.789 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.790 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.790 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:11.790 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.790 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.047 malloc0 00:10:12.047 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.047 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:12.047 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.047 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.047 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.047 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:12.047 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:12.047 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:12.047 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:12.047 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:12.047 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:12.047 { 00:10:12.047 "params": { 00:10:12.047 "name": "Nvme$subsystem", 00:10:12.047 "trtype": "$TEST_TRANSPORT", 00:10:12.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:12.047 "adrfam": "ipv4", 00:10:12.047 "trsvcid": "$NVMF_PORT", 00:10:12.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:12.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:12.047 "hdgst": ${hdgst:-false}, 00:10:12.047 "ddgst": ${ddgst:-false} 00:10:12.047 }, 00:10:12.047 "method": "bdev_nvme_attach_controller" 00:10:12.047 } 00:10:12.047 EOF 00:10:12.047 )") 00:10:12.047 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:12.047 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
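
Note: gen_nvmf_target_json above assembles bdevperf's configuration on the fly: one heredoc fragment per subsystem (here a single Nvme1 controller against 10.0.0.2:4420), joined with a comma via IFS and pretty-printed/validated by jq; the joined result appears in the next entry and is handed to bdevperf through /dev/fd/62. The pattern in miniature (a hedged sketch, with plain strings standing in for the script's heredoc fragments):

    # Build one bdev_nvme_attach_controller fragment per subsystem, then
    # join with "," and let jq reject any malformed output early.
    config=()
    for subsystem in 1 2; do
        config+=("{ \"params\": { \"name\": \"Nvme$subsystem\", \"trtype\": \"tcp\" } }")
    done
    IFS=,
    printf '{ "config": [ %s ] }\n' "${config[*]}" | jq .
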
00:10:12.048 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:12.048 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:12.048 "params": { 00:10:12.048 "name": "Nvme1", 00:10:12.048 "trtype": "tcp", 00:10:12.048 "traddr": "10.0.0.2", 00:10:12.048 "adrfam": "ipv4", 00:10:12.048 "trsvcid": "4420", 00:10:12.048 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:12.048 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:12.048 "hdgst": false, 00:10:12.048 "ddgst": false 00:10:12.048 }, 00:10:12.048 "method": "bdev_nvme_attach_controller" 00:10:12.048 }' 00:10:12.048 [2024-07-26 14:04:19.885087] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:10:12.048 [2024-07-26 14:04:19.885170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151043 ] 00:10:12.048 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.048 [2024-07-26 14:04:19.944433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.048 [2024-07-26 14:04:20.055658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.305 Running I/O for 10 seconds... 00:10:24.503 00:10:24.503 Latency(us) 00:10:24.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.503 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:24.503 Verification LBA range: start 0x0 length 0x1000 00:10:24.503 Nvme1n1 : 10.01 5596.82 43.73 0.00 0.00 22809.52 1699.08 33399.09 00:10:24.503 =================================================================================================================== 00:10:24.503 Total : 5596.82 43.73 0.00 0.00 22809.52 1699.08 33399.09 00:10:24.503 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=152250 00:10:24.503 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:24.503 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.503 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:24.503 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:24.503 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:24.503 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:24.503 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:24.503 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:24.503 { 00:10:24.503 "params": { 00:10:24.503 "name": "Nvme$subsystem", 00:10:24.503 "trtype": "$TEST_TRANSPORT", 00:10:24.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.503 "adrfam": "ipv4", 00:10:24.503 "trsvcid": "$NVMF_PORT", 00:10:24.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.503 "hdgst": ${hdgst:-false}, 00:10:24.503 "ddgst": ${ddgst:-false} 00:10:24.503 }, 00:10:24.503 "method": "bdev_nvme_attach_controller" 00:10:24.503 } 00:10:24.503 EOF 00:10:24.503 )") 00:10:24.503 14:04:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:24.503 [2024-07-26 14:04:30.549087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.503 [2024-07-26 14:04:30.549130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.504 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:24.504 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:24.504 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:24.504 "params": { 00:10:24.504 "name": "Nvme1", 00:10:24.504 "trtype": "tcp", 00:10:24.504 "traddr": "10.0.0.2", 00:10:24.504 "adrfam": "ipv4", 00:10:24.504 "trsvcid": "4420", 00:10:24.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:24.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:24.504 "hdgst": false, 00:10:24.504 "ddgst": false 00:10:24.504 }, 00:10:24.504 "method": "bdev_nvme_attach_controller" 00:10:24.504 }' 00:10:24.504 [2024-07-26 14:04:30.557046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.504 [2024-07-26 14:04:30.557068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.504 [2024-07-26 14:04:30.565067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.504 [2024-07-26 14:04:30.565088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.504 [2024-07-26 14:04:30.573091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.504 [2024-07-26 14:04:30.573111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.504 [2024-07-26 14:04:30.581113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.504 [2024-07-26 14:04:30.581134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.504 [2024-07-26 14:04:30.585443] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
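
Note: the stream of "Requested NSID 1 already in use" / "Unable to add namespace" errors beginning above is deliberate. While the 5-second random read/write bdevperf job runs, the script keeps re-issuing nvmf_subsystem_add_ns for a namespace that already exists; every attempt fails, but the RPC still pauses and resumes the subsystem on the way (the failure is reported from nvmf_rpc_ns_paused), which is exactly the pause/resume-with-zero-copy-requests-in-flight transition under test. A sketch of the loop's shape (the script's actual loop bounds are not visible in this log):

    # Re-add the existing namespace for as long as the I/O job
    # (perfpid=152250 in this run) is alive; each failed add drives one
    # pause/resume cycle through the subsystem state machine.
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
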
00:10:24.504 [2024-07-26 14:04:30.585516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152250 ]
00:10:24.504 [2024-07-26 14:04:30.589135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:24.504 [2024-07-26 14:04:30.589156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:24.504 [... the same two *ERROR* lines repeat at ~8 ms intervals through 2024-07-26 14:04:31.014341 ...]
00:10:24.504 EAL: No free 2048 kB hugepages reported on node 1
00:10:24.504 [2024-07-26 14:04:30.644917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:24.504 [2024-07-26 14:04:30.758606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:24.504 Running I/O for 5 seconds...
00:10:24.505 [2024-07-26 14:04:31.022335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:24.505 [2024-07-26 14:04:31.022356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:24.505 [... the same two *ERROR* lines continue at ~10 ms intervals throughout the I/O run, from 2024-07-26 14:04:31.034798 through 2024-07-26 14:04:33.648713 (elapsed 00:10:24.505 - 00:10:25.800) ...]
00:10:25.800 [2024-07-26 14:04:33.661342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:25.800 [2024-07-26 14:04:33.661369]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.800 [2024-07-26 14:04:33.671183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.800 [2024-07-26 14:04:33.671211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.800 [2024-07-26 14:04:33.681630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.800 [2024-07-26 14:04:33.681657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.800 [2024-07-26 14:04:33.692289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.800 [2024-07-26 14:04:33.692317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.800 [2024-07-26 14:04:33.704541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.800 [2024-07-26 14:04:33.704568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.800 [2024-07-26 14:04:33.713936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.800 [2024-07-26 14:04:33.713963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.800 [2024-07-26 14:04:33.724864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.800 [2024-07-26 14:04:33.724891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.800 [2024-07-26 14:04:33.735206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.800 [2024-07-26 14:04:33.735233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.800 [2024-07-26 14:04:33.745513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.800 [2024-07-26 14:04:33.745549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.800 [2024-07-26 14:04:33.755948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.800 [2024-07-26 14:04:33.755975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.800 [2024-07-26 14:04:33.766656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.800 [2024-07-26 14:04:33.766683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.800 [2024-07-26 14:04:33.776803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.800 [2024-07-26 14:04:33.776830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.800 [2024-07-26 14:04:33.787318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.800 [2024-07-26 14:04:33.787345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.800 [2024-07-26 14:04:33.798313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.800 [2024-07-26 14:04:33.798340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.800 [2024-07-26 14:04:33.809075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.800 [2024-07-26 14:04:33.809101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.821551] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.821578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.831618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.831651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.842089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.842116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.852493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.852521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.862929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.862967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.873465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.873492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.883826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.883852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.894557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.894585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.905095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.905122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.918893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.918919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.929080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.929106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.939570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.939597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.950177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.950204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.960176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.960202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.970367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.970395] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.980060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.980087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:33.990083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:33.990110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:34.000128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:34.000156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:34.010399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:34.010427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:34.020607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:34.020635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:34.030487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:34.030521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:34.040933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:34.040960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:34.053515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:34.053551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:34.063407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:34.063434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.059 [2024-07-26 14:04:34.073546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.059 [2024-07-26 14:04:34.073573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.083728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.083755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.094410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.094437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.104997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.105024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.115717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.115744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.129215] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.129242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.139356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.139383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.149747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.149774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.160351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.160378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.171042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.171070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.181468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.181495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.194493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.194521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.204407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.204436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.214838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.214867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.225435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.225462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.235853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.235888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.246316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.246344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.256432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.256460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.266646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.266674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.277013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.277040] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.287134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.287162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.297591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.297619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.307977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.308006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.318468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.318495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.318 [2024-07-26 14:04:34.328391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.318 [2024-07-26 14:04:34.328418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.338556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.338592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.348687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.348714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.359063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.359090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.371106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.371133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.381143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.381170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.391184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.391211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.401575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.401603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.412386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.412413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.422941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.422968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.435088] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.435126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.444834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.444863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.455019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.455047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.465440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.465467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.475535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.475562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.485826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.485852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.496615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.496643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.508980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.509007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.518947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.518974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.529207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.529233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.539870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.539898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.552323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.552351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.562393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.562420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.572574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.572602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.582718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.582745] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.577 [2024-07-26 14:04:34.593390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.577 [2024-07-26 14:04:34.593416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.605913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.605940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.615779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.615806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.626415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.626442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.638875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.638903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.650943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.650971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.659607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.659634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.670872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.670899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.681226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.681253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.691219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.691245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.701599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.701626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.711830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.711857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.722142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.722169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.732443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.732471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.742831] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.742858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.753140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.753167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.763508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.763542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.773842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.773869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.784202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.784229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.794518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.794552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.804893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.804920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.816763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.816791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.828017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.828045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.836654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.836680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.836 [2024-07-26 14:04:34.849508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.836 [2024-07-26 14:04:34.849544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:34.859487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:34.859515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:34.869800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:34.869828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:34.881799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:34.881826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:34.891235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:34.891262] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:34.901464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:34.901491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:34.912177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:34.912205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:34.924663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:34.924691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:34.934852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:34.934879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:34.944963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:34.944990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:34.955569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:34.955596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:34.967814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:34.967841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:34.977495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:34.977522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:34.987894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:34.987921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:34.998021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:34.998048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:35.008245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:35.008272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:35.018458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:35.018485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:35.029011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:35.029038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.095 [2024-07-26 14:04:35.041238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.095 [2024-07-26 14:04:35.041266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.096 [2024-07-26 14:04:35.050715] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.096 [2024-07-26 14:04:35.050743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.096 [2024-07-26 14:04:35.062954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.096 [2024-07-26 14:04:35.062981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.096 [2024-07-26 14:04:35.074575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.096 [2024-07-26 14:04:35.074602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.096 [2024-07-26 14:04:35.083664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.096 [2024-07-26 14:04:35.083690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.096 [2024-07-26 14:04:35.094940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.096 [2024-07-26 14:04:35.094966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.096 [2024-07-26 14:04:35.107693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.096 [2024-07-26 14:04:35.107720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.117630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.117657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.127917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.127944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.138039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.138067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.148347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.148374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.158547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.158575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.168639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.168667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.179466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.179494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.191707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.191735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.201374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.201402] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.212111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.212138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.222856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.222882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.235559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.235594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.245757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.245785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.260010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.260039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.269909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.269936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.279901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.279928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.289987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.290014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.300734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.300763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.313322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.313350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.322469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.322497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.335494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.335522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.345565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.345604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.356114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.356142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.355 [2024-07-26 14:04:35.366434] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.355 [2024-07-26 14:04:35.366462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.376637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.376665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.386722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.386749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.397095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.397123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.407244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.407271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.417506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.417544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.427672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.427700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.437846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.437881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.448140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.448168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.459037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.459065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.469763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.469800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.480358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.480385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.492925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.492952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.502798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.502824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.512954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.512980] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.523204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.523231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.533574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.533601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.543916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.543943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.553992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.554019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.564286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.564313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.574513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.574549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.614 [2024-07-26 14:04:35.585143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.614 [2024-07-26 14:04:35.585171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.615 [2024-07-26 14:04:35.596218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.615 [2024-07-26 14:04:35.596246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.615 [2024-07-26 14:04:35.606270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.615 [2024-07-26 14:04:35.606296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.615 [2024-07-26 14:04:35.616615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.615 [2024-07-26 14:04:35.616641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.615 [2024-07-26 14:04:35.626849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.615 [2024-07-26 14:04:35.626876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.873 [2024-07-26 14:04:35.637026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.873 [2024-07-26 14:04:35.637060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.873 [2024-07-26 14:04:35.647274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.873 [2024-07-26 14:04:35.647300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.873 [2024-07-26 14:04:35.657394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.873 [2024-07-26 14:04:35.657421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.873 [2024-07-26 14:04:35.667645] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.873 [2024-07-26 14:04:35.667672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.873 [2024-07-26 14:04:35.678195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.873 [2024-07-26 14:04:35.678223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.873 [2024-07-26 14:04:35.688729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.873 [2024-07-26 14:04:35.688756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.873 [2024-07-26 14:04:35.699093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.873 [2024-07-26 14:04:35.699119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.873 [2024-07-26 14:04:35.709291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.873 [2024-07-26 14:04:35.709318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.873 [2024-07-26 14:04:35.719720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.873 [2024-07-26 14:04:35.719748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.873 [2024-07-26 14:04:35.729877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.873 [2024-07-26 14:04:35.729904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.873 [2024-07-26 14:04:35.739954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.873 [2024-07-26 14:04:35.739981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.873 [2024-07-26 14:04:35.750167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.873 [2024-07-26 14:04:35.750194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.873 [2024-07-26 14:04:35.760325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.874 [2024-07-26 14:04:35.760352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.874 [2024-07-26 14:04:35.770750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.874 [2024-07-26 14:04:35.770780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.874 [2024-07-26 14:04:35.781363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.874 [2024-07-26 14:04:35.781390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.874 [2024-07-26 14:04:35.793589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.874 [2024-07-26 14:04:35.793616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.874 [2024-07-26 14:04:35.803328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.874 [2024-07-26 14:04:35.803356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.874 [2024-07-26 14:04:35.814091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.874 [2024-07-26 14:04:35.814118] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.874 [2024-07-26 14:04:35.824348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.874 [2024-07-26 14:04:35.824375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.874 [2024-07-26 14:04:35.834724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.874 [2024-07-26 14:04:35.834759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.874 [2024-07-26 14:04:35.845308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.874 [2024-07-26 14:04:35.845335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.874 [2024-07-26 14:04:35.855809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.874 [2024-07-26 14:04:35.855836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.874 [2024-07-26 14:04:35.866507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.874 [2024-07-26 14:04:35.866542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.874 [2024-07-26 14:04:35.877123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.874 [2024-07-26 14:04:35.877150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.874 [2024-07-26 14:04:35.889555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.874 [2024-07-26 14:04:35.889582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.132 [2024-07-26 14:04:35.899137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.132 [2024-07-26 14:04:35.899164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.132 [2024-07-26 14:04:35.909736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.132 [2024-07-26 14:04:35.909764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.132 [2024-07-26 14:04:35.920088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.132 [2024-07-26 14:04:35.920116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.132 [2024-07-26 14:04:35.930288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.132 [2024-07-26 14:04:35.930315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.132 [2024-07-26 14:04:35.940643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.132 [2024-07-26 14:04:35.940671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.132 [2024-07-26 14:04:35.950661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.132 [2024-07-26 14:04:35.950688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.132 [2024-07-26 14:04:35.961154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.132 [2024-07-26 14:04:35.961180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.132 [2024-07-26 14:04:35.971548] 
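The repeated pair above is the expected failure path for duplicate namespace adds: spdk_nvmf_subsystem_add_ns_ext() (subsystem.c:2058) rejects each request because NSID 1 is already allocated to the subsystem, and the RPC handler (nvmf_rpc.c:1553) reports it as "Unable to add namespace". A minimal sketch of the kind of RPC loop that would produce this output; the subsystem NQN, bdev name, and iteration count are illustrative placeholders, not values taken from this log:

    # Repeatedly ask the target to add a namespace with an NSID that already
    # exists; every call is expected to fail with the error pair logged above.
    # NQN and bdev name below are placeholders for whatever the test set up.
    for _ in $(seq 1 300); do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1 || true
    done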
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.132 [2024-07-26 14:04:35.971576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.132 [2024-07-26 14:04:35.982195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.132 [2024-07-26 14:04:35.982223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.132 [2024-07-26 14:04:35.992784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.132 [2024-07-26 14:04:35.992811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.132 [2024-07-26 14:04:36.005083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.132 [2024-07-26 14:04:36.005111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.132 [2024-07-26 14:04:36.014487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.132 [2024-07-26 14:04:36.014516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 [2024-07-26 14:04:36.024661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.024688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 [2024-07-26 14:04:36.034871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.034904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 [2024-07-26 14:04:36.042090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.042118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 00:10:28.133 Latency(us) 00:10:28.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.133 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:28.133 Nvme1n1 : 5.01 12205.07 95.35 0.00 0.00 10474.38 4563.25 21554.06 00:10:28.133 =================================================================================================================== 00:10:28.133 Total : 12205.07 95.35 0.00 0.00 10474.38 4563.25 21554.06 00:10:28.133 [2024-07-26 14:04:36.047167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.047192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 [2024-07-26 14:04:36.055194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.055217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 [2024-07-26 14:04:36.063198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.063218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 [2024-07-26 14:04:36.071297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.071345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 [2024-07-26 14:04:36.079308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.079355] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 [2024-07-26 14:04:36.087346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.087400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 [2024-07-26 14:04:36.095357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.095406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 [2024-07-26 14:04:36.103385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.103435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 [2024-07-26 14:04:36.111402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.111452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 [2024-07-26 14:04:36.119428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.119480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 [2024-07-26 14:04:36.127445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.127491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 [2024-07-26 14:04:36.135473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.135522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.133 [2024-07-26 14:04:36.143492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.133 [2024-07-26 14:04:36.143548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.151509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.151575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.159551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.159611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.167566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.167620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.175555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.175604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.183552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.183573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.191583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.191604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.199596] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.199617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.207622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.207659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.215713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.215763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.223709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.223758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.231676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.231698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.239691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.239712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.247711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.247732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.255734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.255754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.263811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.263853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.271841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.271891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.279850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.279907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.287840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.287860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 [2024-07-26 14:04:36.295860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.392 [2024-07-26 14:04:36.295894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (152250) - No such process 00:10:28.392 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 152250 00:10:28.392 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.392 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.392 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:28.392 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.392 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:28.392 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.392 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:28.392 delay0 00:10:28.392 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.392 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:28.392 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.392 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:28.392 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.393 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:28.393 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.651 [2024-07-26 14:04:36.413330] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:35.210 Initializing NVMe Controllers 00:10:35.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:35.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:35.210 Initialization complete. Launching workers. 
00:10:35.210 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 300, failed: 6569 00:10:35.210 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6823, failed to submit 46 00:10:35.210 success 6688, unsuccess 135, failed 0 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:35.210 rmmod nvme_tcp 00:10:35.210 rmmod nvme_fabrics 00:10:35.210 rmmod nvme_keyring 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 150927 ']' 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 150927 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 150927 ']' 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 150927 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 150927 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 150927' 00:10:35.210 killing process with pid 150927 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 150927 00:10:35.210 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 150927 00:10:35.210 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:35.210 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:35.210 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:35.210 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:35.210 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:35.210 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.210 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.211 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.117 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:37.117 00:10:37.117 real 0m27.882s 00:10:37.117 user 0m40.564s 00:10:37.117 sys 0m8.264s 00:10:37.117 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.117 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.117 ************************************ 00:10:37.117 END TEST nvmf_zcopy 00:10:37.117 ************************************ 00:10:37.117 14:04:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:37.117 14:04:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:37.117 14:04:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.117 14:04:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.117 ************************************ 00:10:37.117 START TEST nvmf_nmic 00:10:37.117 ************************************ 00:10:37.117 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:37.376 * Looking for test storage... 00:10:37.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:37.376 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:37.377 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:39.277 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:39.278 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:39.278 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:39.278 Found net devices under 0000:09:00.0: cvl_0_0 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:39.278 Found net devices under 0000:09:00.1: cvl_0_1 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.278 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.536 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.536 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.536 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:39.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:39.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:10:39.537 00:10:39.537 --- 10.0.0.2 ping statistics --- 00:10:39.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.537 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:39.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:10:39.537 00:10:39.537 --- 10.0.0.1 ping statistics --- 00:10:39.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.537 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=155632 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 155632 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 155632 ']' 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:39.537 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.537 [2024-07-26 14:04:47.458860] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:10:39.537 [2024-07-26 14:04:47.458942] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.537 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.537 [2024-07-26 14:04:47.521830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.795 [2024-07-26 14:04:47.628532] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.795 [2024-07-26 14:04:47.628579] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.795 [2024-07-26 14:04:47.628608] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.795 [2024-07-26 14:04:47.628620] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.795 [2024-07-26 14:04:47.628630] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.795 [2024-07-26 14:04:47.628692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.795 [2024-07-26 14:04:47.628747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.795 [2024-07-26 14:04:47.628797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.795 [2024-07-26 14:04:47.628799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.795 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:39.795 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:39.795 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:39.795 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:39.795 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.795 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.795 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.795 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.795 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.795 [2024-07-26 14:04:47.793096] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.795 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.795 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:39.795 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.795 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.054 Malloc0 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.054 [2024-07-26 14:04:47.846423] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:40.054 test case1: single bdev can't be used in multiple subsystems 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.054 [2024-07-26 14:04:47.870265] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:40.054 [2024-07-26 14:04:47.870300] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:40.054 [2024-07-26 14:04:47.870316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.054 request: 00:10:40.054 { 00:10:40.054 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:40.054 "namespace": { 
00:10:40.054 "bdev_name": "Malloc0", 00:10:40.054 "no_auto_visible": false 00:10:40.054 }, 00:10:40.054 "method": "nvmf_subsystem_add_ns", 00:10:40.054 "req_id": 1 00:10:40.054 } 00:10:40.054 Got JSON-RPC error response 00:10:40.054 response: 00:10:40.054 { 00:10:40.054 "code": -32602, 00:10:40.054 "message": "Invalid parameters" 00:10:40.054 } 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:40.054 Adding namespace failed - expected result. 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:40.054 test case2: host connect to nvmf target in multiple paths 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.054 [2024-07-26 14:04:47.878373] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.054 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:40.619 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:41.553 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:41.553 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:41.553 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.553 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:41.553 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:43.448 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:43.448 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:43.448 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.448 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:43.448 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.448 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 
00:10:43.448 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:10:43.448 [global]
00:10:43.448 thread=1
00:10:43.448 invalidate=1
00:10:43.448 rw=write
00:10:43.448 time_based=1
00:10:43.448 runtime=1
00:10:43.448 ioengine=libaio
00:10:43.448 direct=1
00:10:43.448 bs=4096
00:10:43.448 iodepth=1
00:10:43.448 norandommap=0
00:10:43.448 numjobs=1
00:10:43.448
00:10:43.448 verify_dump=1
00:10:43.448 verify_backlog=512
00:10:43.448 verify_state_save=0
00:10:43.448 do_verify=1
00:10:43.448 verify=crc32c-intel
00:10:43.448 [job0]
00:10:43.448 filename=/dev/nvme0n1
00:10:43.448 Could not set queue depth (nvme0n1)
00:10:43.706 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:43.706 fio-3.35
00:10:43.706 Starting 1 thread
00:10:45.077
00:10:45.077 job0: (groupid=0, jobs=1): err= 0: pid=156271: Fri Jul 26 14:04:52 2024
00:10:45.077 read: IOPS=22, BW=91.7KiB/s (93.9kB/s)(92.0KiB/1003msec)
00:10:45.077 slat (nsec): min=14496, max=48631, avg=22491.91, stdev=9972.59
00:10:45.077 clat (usec): min=241, max=41302, avg=39213.48, stdev=8496.01
00:10:45.077 lat (usec): min=260, max=41319, avg=39235.97, stdev=8496.60
00:10:45.077 clat percentiles (usec):
00:10:45.077 | 1.00th=[ 241], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:10:45.077 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:10:45.077 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:10:45.077 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:10:45.077 | 99.99th=[41157]
00:10:45.077 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets
00:10:45.077 slat (nsec): min=8652, max=55771, avg=19304.80, stdev=7355.44
00:10:45.077 clat (usec): min=132, max=319, avg=171.31, stdev=16.70
00:10:45.077 lat (usec): min=142, max=358, avg=190.62, stdev=19.53
00:10:45.077 clat percentiles (usec):
00:10:45.077 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 153], 20.00th=[ 161],
00:10:45.077 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176],
00:10:45.077 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 186], 95.00th=[ 190],
00:10:45.077 | 99.00th=[ 217], 99.50th=[ 265], 99.90th=[ 322], 99.95th=[ 322],
00:10:45.077 | 99.99th=[ 322]
00:10:45.077 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:10:45.077 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:10:45.077 lat (usec) : 250=95.33%, 500=0.56%
00:10:45.077 lat (msec) : 50=4.11%
00:10:45.077 cpu : usr=0.80%, sys=1.20%, ctx=535, majf=0, minf=2
00:10:45.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:45.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:45.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:45.077 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:45.077 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:45.077
00:10:45.077 Run status group 0 (all jobs):
00:10:45.077 READ: bw=91.7KiB/s (93.9kB/s), 91.7KiB/s-91.7KiB/s (93.9kB/s-93.9kB/s), io=92.0KiB (94.2kB), run=1003-1003msec
00:10:45.077 WRITE: bw=2042KiB/s (2091kB/s), 2042KiB/s-2042KiB/s (2091kB/s-2091kB/s), io=2048KiB (2097kB), run=1003-1003msec
00:10:45.077
00:10:45.077 Disk stats (read/write):
00:10:45.077 nvme0n1: ios=70/512, merge=0/0, ticks=810/82, in_queue=892, util=91.98%
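The fio pass above writes 4 KiB blocks through libaio at queue depth 1 and verifies the data with CRC32C (do_verify=1, verify=crc32c-intel); the numbers are self-consistent, since 512 write completions x 4 KiB = 2048 KiB over the 1003 ms run gives the reported 2042 KiB/s. As the job file shows, fio-wrapper just turns its -i/-d/-t/-r/-v flags into job options, so a roughly equivalent direct invocation would be (a sketch; it assumes fio is installed and the connected namespace happened to enumerate as /dev/nvme0n1, which depends on the host):

    # one-shot fio invocation mirroring the generated job file, no wrapper needed
    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread --time_based \
        --runtime=1 --invalidate=1 --do_verify=1 --verify=crc32c-intel \
        --verify_dump=1 --verify_backlog=512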
00:10:45.077 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:45.077 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.077 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:45.077 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:45.077 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.077 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:45.077 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.077 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:45.077 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:45.077 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:45.077 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:45.078 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:45.078 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:45.078 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:45.078 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:45.078 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:45.078 rmmod nvme_tcp 00:10:45.078 rmmod nvme_fabrics 00:10:45.078 rmmod nvme_keyring 00:10:45.078 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:45.078 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:45.078 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:45.078 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 155632 ']' 00:10:45.078 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 155632 00:10:45.078 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 155632 ']' 00:10:45.078 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 155632 00:10:45.078 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:45.078 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.078 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 155632 00:10:45.078 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:45.078 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:45.078 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 155632' 00:10:45.078 killing process with pid 155632 00:10:45.078 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 155632 
00:10:45.078 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 155632 00:10:45.644 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:45.644 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:45.644 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:45.644 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:45.644 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:45.644 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.644 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.644 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:47.550 00:10:47.550 real 0m10.314s 00:10:47.550 user 0m23.664s 00:10:47.550 sys 0m2.625s 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.550 ************************************ 00:10:47.550 END TEST nvmf_nmic 00:10:47.550 ************************************ 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:47.550 ************************************ 00:10:47.550 START TEST nvmf_fio_target 00:10:47.550 ************************************ 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:47.550 * Looking for test storage... 
00:10:47.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.550 14:04:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.550 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:47.551 14:04:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:47.551 14:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:50.078 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:50.079 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:50.079 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:50.079 
14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:50.079 Found net devices under 0000:09:00.0: cvl_0_0 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:50.079 Found net devices under 0000:09:00.1: cvl_0_1 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:50.079 14:04:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:50.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:10:50.079 00:10:50.079 --- 10.0.0.2 ping statistics --- 00:10:50.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.079 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:10:50.079 00:10:50.079 --- 10.0.0.1 ping statistics --- 00:10:50.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.079 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=158350 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 158350 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 158350 ']' 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.079 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.079 [2024-07-26 14:04:57.760947] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:10:50.079 [2024-07-26 14:04:57.761022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.079 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.079 [2024-07-26 14:04:57.819242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.079 [2024-07-26 14:04:57.920764] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.079 [2024-07-26 14:04:57.920839] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.079 [2024-07-26 14:04:57.920852] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.079 [2024-07-26 14:04:57.920862] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.079 [2024-07-26 14:04:57.920872] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.079 [2024-07-26 14:04:57.921003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.079 [2024-07-26 14:04:57.921123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.080 [2024-07-26 14:04:57.921191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.080 [2024-07-26 14:04:57.921194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.080 14:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:50.080 14:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:50.080 14:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:50.080 14:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:50.080 14:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.080 14:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.080 14:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:50.337 [2024-07-26 14:04:58.321759] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.337 14:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.595 14:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:50.595 14:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.853 14:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:50.853 14:04:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.110 14:04:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:51.111 14:04:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.677 14:04:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:51.677 14:04:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:51.677 14:04:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.935 14:04:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:51.935 14:04:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.193 14:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:52.193 14:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.451 14:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:52.451 14:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:52.708 14:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:52.966 14:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:52.966 14:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.223 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:53.224 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:53.481 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.738 [2024-07-26 14:05:01.659087] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.738 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:53.995 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:54.253 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.818 14:05:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:54.818 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:54.818 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:54.818 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:54.818 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:54.818 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:57.361 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:57.361 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:57.361 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.361 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:57.361 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.361 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:57.361 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:57.361 [global] 00:10:57.361 thread=1 00:10:57.361 invalidate=1 00:10:57.361 rw=write 00:10:57.361 time_based=1 00:10:57.361 runtime=1 00:10:57.361 ioengine=libaio 00:10:57.361 direct=1 00:10:57.361 bs=4096 00:10:57.361 iodepth=1 00:10:57.361 norandommap=0 00:10:57.361 numjobs=1 00:10:57.361 00:10:57.361 verify_dump=1 00:10:57.361 verify_backlog=512 00:10:57.361 verify_state_save=0 00:10:57.361 do_verify=1 00:10:57.361 verify=crc32c-intel 00:10:57.361 [job0] 00:10:57.361 filename=/dev/nvme0n1 00:10:57.361 [job1] 00:10:57.361 filename=/dev/nvme0n2 00:10:57.361 [job2] 00:10:57.361 filename=/dev/nvme0n3 00:10:57.361 [job3] 00:10:57.361 filename=/dev/nvme0n4 00:10:57.361 Could not set queue depth (nvme0n1) 00:10:57.361 Could not set queue depth (nvme0n2) 00:10:57.361 Could not set queue depth (nvme0n3) 00:10:57.361 Could not set queue depth (nvme0n4) 00:10:57.361 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.361 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.361 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.361 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.361 fio-3.35 00:10:57.361 Starting 4 threads 00:10:58.294 00:10:58.294 job0: (groupid=0, jobs=1): err= 0: pid=159431: Fri Jul 26 14:05:06 2024 00:10:58.294 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:58.294 slat (nsec): min=5640, max=38491, avg=13796.49, stdev=4909.63 00:10:58.294 clat (usec): min=185, max=2506, avg=239.50, stdev=61.77 00:10:58.294 lat (usec): min=192, max=2514, avg=253.29, stdev=63.16 00:10:58.294 clat percentiles (usec): 00:10:58.294 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 217], 
00:10:58.294 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 239], 00:10:58.294 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 314], 00:10:58.294 | 99.00th=[ 375], 99.50th=[ 400], 99.90th=[ 701], 99.95th=[ 832], 00:10:58.294 | 99.99th=[ 2507] 00:10:58.294 write: IOPS=2262, BW=9051KiB/s (9268kB/s)(9060KiB/1001msec); 0 zone resets 00:10:58.294 slat (nsec): min=6483, max=66543, avg=16931.75, stdev=6698.06 00:10:58.294 clat (usec): min=131, max=652, avg=187.24, stdev=32.12 00:10:58.294 lat (usec): min=140, max=677, avg=204.17, stdev=34.18 00:10:58.294 clat percentiles (usec): 00:10:58.294 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 165], 00:10:58.294 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 190], 00:10:58.294 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 223], 95.00th=[ 245], 00:10:58.294 | 99.00th=[ 297], 99.50th=[ 347], 99.90th=[ 437], 99.95th=[ 445], 00:10:58.294 | 99.99th=[ 652] 00:10:58.294 bw ( KiB/s): min= 8192, max= 8192, per=39.95%, avg=8192.00, stdev= 0.00, samples=1 00:10:58.294 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:58.294 lat (usec) : 250=89.75%, 500=10.13%, 750=0.07%, 1000=0.02% 00:10:58.294 lat (msec) : 4=0.02% 00:10:58.294 cpu : usr=5.40%, sys=8.50%, ctx=4316, majf=0, minf=1 00:10:58.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.294 issued rwts: total=2048,2265,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.294 job1: (groupid=0, jobs=1): err= 0: pid=159433: Fri Jul 26 14:05:06 2024 00:10:58.294 read: IOPS=143, BW=573KiB/s (586kB/s)(596KiB/1041msec) 00:10:58.294 slat (nsec): min=12584, max=54463, avg=20958.62, stdev=7873.75 00:10:58.294 clat (usec): min=229, max=41106, avg=6057.01, stdev=14184.34 00:10:58.294 lat (usec): min=262, max=41124, avg=6077.97, stdev=14186.27 00:10:58.294 clat percentiles (usec): 00:10:58.294 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 255], 20.00th=[ 289], 00:10:58.294 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 330], 00:10:58.294 | 70.00th=[ 363], 80.00th=[ 392], 90.00th=[41157], 95.00th=[41157], 00:10:58.294 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:58.294 | 99.99th=[41157] 00:10:58.294 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:10:58.294 slat (usec): min=5, max=12118, avg=39.67, stdev=534.88 00:10:58.294 clat (usec): min=162, max=754, avg=218.74, stdev=35.75 00:10:58.294 lat (usec): min=169, max=12329, avg=258.41, stdev=535.68 00:10:58.294 clat percentiles (usec): 00:10:58.294 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 198], 00:10:58.294 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 221], 00:10:58.294 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 249], 95.00th=[ 269], 00:10:58.294 | 99.00th=[ 302], 99.50th=[ 306], 99.90th=[ 758], 99.95th=[ 758], 00:10:58.294 | 99.99th=[ 758] 00:10:58.294 bw ( KiB/s): min= 4096, max= 4096, per=19.97%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.294 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.294 lat (usec) : 250=71.86%, 500=24.66%, 1000=0.15% 00:10:58.294 lat (msec) : 4=0.15%, 50=3.18% 00:10:58.294 cpu : usr=0.29%, sys=1.35%, ctx=663, majf=0, minf=2 00:10:58.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:10:58.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.294 issued rwts: total=149,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.295 job2: (groupid=0, jobs=1): err= 0: pid=159435: Fri Jul 26 14:05:06 2024 00:10:58.295 read: IOPS=1974, BW=7896KiB/s (8086kB/s)(7904KiB/1001msec) 00:10:58.295 slat (nsec): min=5773, max=52775, avg=14098.90, stdev=5209.70 00:10:58.295 clat (usec): min=201, max=523, avg=255.56, stdev=36.85 00:10:58.295 lat (usec): min=207, max=530, avg=269.66, stdev=37.19 00:10:58.295 clat percentiles (usec): 00:10:58.295 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 235], 00:10:58.295 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:10:58.295 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 293], 95.00th=[ 318], 00:10:58.295 | 99.00th=[ 441], 99.50th=[ 474], 99.90th=[ 506], 99.95th=[ 523], 00:10:58.295 | 99.99th=[ 523] 00:10:58.295 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:58.295 slat (nsec): min=7644, max=61830, avg=18103.59, stdev=6625.48 00:10:58.295 clat (usec): min=138, max=1214, avg=201.06, stdev=33.26 00:10:58.295 lat (usec): min=148, max=1237, avg=219.16, stdev=35.80 00:10:58.295 clat percentiles (usec): 00:10:58.295 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 180], 00:10:58.295 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:10:58.295 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 235], 95.00th=[ 247], 00:10:58.295 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 371], 99.95th=[ 461], 00:10:58.295 | 99.99th=[ 1221] 00:10:58.295 bw ( KiB/s): min= 8192, max= 8192, per=39.95%, avg=8192.00, stdev= 0.00, samples=1 00:10:58.295 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:58.295 lat (usec) : 250=75.15%, 500=24.73%, 750=0.10% 00:10:58.295 lat (msec) : 2=0.02% 00:10:58.295 cpu : usr=5.40%, sys=8.10%, ctx=4025, majf=0, minf=1 00:10:58.295 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.295 issued rwts: total=1976,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.295 job3: (groupid=0, jobs=1): err= 0: pid=159436: Fri Jul 26 14:05:06 2024 00:10:58.295 read: IOPS=401, BW=1606KiB/s (1645kB/s)(1608KiB/1001msec) 00:10:58.295 slat (nsec): min=6348, max=36382, avg=11179.87, stdev=6481.80 00:10:58.295 clat (usec): min=197, max=42012, avg=2138.62, stdev=8600.97 00:10:58.295 lat (usec): min=205, max=42030, avg=2149.80, stdev=8604.55 00:10:58.295 clat percentiles (usec): 00:10:58.295 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:10:58.295 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 265], 60.00th=[ 281], 00:10:58.295 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 461], 95.00th=[ 545], 00:10:58.295 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:58.295 | 99.99th=[42206] 00:10:58.295 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:58.295 slat (nsec): min=6229, max=63342, avg=17506.32, stdev=8437.19 00:10:58.295 clat (usec): min=185, max=883, avg=241.17, stdev=46.83 00:10:58.295 lat (usec): min=198, max=910, 
avg=258.68, stdev=46.58 00:10:58.295 clat percentiles (usec): 00:10:58.295 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 210], 20.00th=[ 223], 00:10:58.295 | 30.00th=[ 231], 40.00th=[ 233], 50.00th=[ 235], 60.00th=[ 239], 00:10:58.295 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 289], 00:10:58.295 | 99.00th=[ 379], 99.50th=[ 553], 99.90th=[ 881], 99.95th=[ 881], 00:10:58.295 | 99.99th=[ 881] 00:10:58.295 bw ( KiB/s): min= 4096, max= 4096, per=19.97%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.295 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.295 lat (usec) : 250=62.80%, 500=34.03%, 750=1.09%, 1000=0.11% 00:10:58.295 lat (msec) : 50=1.97% 00:10:58.295 cpu : usr=0.40%, sys=2.00%, ctx=915, majf=0, minf=1 00:10:58.295 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.295 issued rwts: total=402,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.295 00:10:58.295 Run status group 0 (all jobs): 00:10:58.295 READ: bw=17.2MiB/s (18.0MB/s), 573KiB/s-8184KiB/s (586kB/s-8380kB/s), io=17.9MiB (18.7MB), run=1001-1041msec 00:10:58.295 WRITE: bw=20.0MiB/s (21.0MB/s), 1967KiB/s-9051KiB/s (2015kB/s-9268kB/s), io=20.8MiB (21.9MB), run=1001-1041msec 00:10:58.295 00:10:58.295 Disk stats (read/write): 00:10:58.295 nvme0n1: ios=1588/2039, merge=0/0, ticks=641/373, in_queue=1014, util=97.29% 00:10:58.295 nvme0n2: ios=169/512, merge=0/0, ticks=1682/112, in_queue=1794, util=97.45% 00:10:58.295 nvme0n3: ios=1536/1869, merge=0/0, ticks=363/361, in_queue=724, util=88.85% 00:10:58.295 nvme0n4: ios=159/512, merge=0/0, ticks=1428/119, in_queue=1547, util=97.35% 00:10:58.553 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:58.553 [global] 00:10:58.553 thread=1 00:10:58.553 invalidate=1 00:10:58.553 rw=randwrite 00:10:58.553 time_based=1 00:10:58.553 runtime=1 00:10:58.553 ioengine=libaio 00:10:58.553 direct=1 00:10:58.553 bs=4096 00:10:58.553 iodepth=1 00:10:58.553 norandommap=0 00:10:58.553 numjobs=1 00:10:58.553 00:10:58.553 verify_dump=1 00:10:58.553 verify_backlog=512 00:10:58.553 verify_state_save=0 00:10:58.553 do_verify=1 00:10:58.553 verify=crc32c-intel 00:10:58.553 [job0] 00:10:58.553 filename=/dev/nvme0n1 00:10:58.553 [job1] 00:10:58.553 filename=/dev/nvme0n2 00:10:58.553 [job2] 00:10:58.553 filename=/dev/nvme0n3 00:10:58.553 [job3] 00:10:58.553 filename=/dev/nvme0n4 00:10:58.553 Could not set queue depth (nvme0n1) 00:10:58.553 Could not set queue depth (nvme0n2) 00:10:58.553 Could not set queue depth (nvme0n3) 00:10:58.553 Could not set queue depth (nvme0n4) 00:10:58.553 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.553 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.553 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.553 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.553 fio-3.35 00:10:58.553 Starting 4 threads 00:10:59.927 00:10:59.927 job0: (groupid=0, jobs=1): err= 0: pid=159660: Fri 
Jul 26 14:05:07 2024 00:10:59.927 read: IOPS=1913, BW=7653KiB/s (7837kB/s)(7684KiB/1004msec) 00:10:59.927 slat (nsec): min=4642, max=37681, avg=11021.41, stdev=4782.99 00:10:59.927 clat (usec): min=170, max=41024, avg=286.57, stdev=1607.51 00:10:59.927 lat (usec): min=175, max=41042, avg=297.59, stdev=1607.68 00:10:59.927 clat percentiles (usec): 00:10:59.927 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 204], 00:10:59.927 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:10:59.927 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 262], 00:10:59.927 | 99.00th=[ 297], 99.50th=[ 338], 99.90th=[41157], 99.95th=[41157], 00:10:59.927 | 99.99th=[41157] 00:10:59.927 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:10:59.927 slat (nsec): min=7181, max=56221, avg=18736.26, stdev=6285.01 00:10:59.927 clat (usec): min=138, max=356, avg=183.56, stdev=21.13 00:10:59.927 lat (usec): min=147, max=372, avg=202.30, stdev=22.64 00:10:59.927 clat percentiles (usec): 00:10:59.927 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 169], 00:10:59.927 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:10:59.927 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 225], 00:10:59.927 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 330], 99.95th=[ 351], 00:10:59.927 | 99.99th=[ 359] 00:10:59.927 bw ( KiB/s): min= 7304, max= 9080, per=36.76%, avg=8192.00, stdev=1255.82, samples=2 00:10:59.927 iops : min= 1826, max= 2270, avg=2048.00, stdev=313.96, samples=2 00:10:59.928 lat (usec) : 250=94.94%, 500=4.99% 00:10:59.928 lat (msec) : 50=0.08% 00:10:59.928 cpu : usr=4.89%, sys=7.08%, ctx=3973, majf=0, minf=1 00:10:59.928 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.928 issued rwts: total=1921,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.928 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.928 job1: (groupid=0, jobs=1): err= 0: pid=159661: Fri Jul 26 14:05:07 2024 00:10:59.928 read: IOPS=1023, BW=4095KiB/s (4193kB/s)(4140KiB/1011msec) 00:10:59.928 slat (nsec): min=5466, max=60246, avg=12098.54, stdev=4660.74 00:10:59.928 clat (usec): min=178, max=41042, avg=664.29, stdev=4180.95 00:10:59.928 lat (usec): min=190, max=41058, avg=676.39, stdev=4181.71 00:10:59.928 clat percentiles (usec): 00:10:59.928 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:10:59.928 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 221], 00:10:59.928 | 70.00th=[ 235], 80.00th=[ 273], 90.00th=[ 302], 95.00th=[ 338], 00:10:59.928 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:59.928 | 99.99th=[41157] 00:10:59.928 write: IOPS=1519, BW=6077KiB/s (6223kB/s)(6144KiB/1011msec); 0 zone resets 00:10:59.928 slat (nsec): min=6582, max=57201, avg=13405.35, stdev=4901.32 00:10:59.928 clat (usec): min=131, max=597, avg=182.75, stdev=37.17 00:10:59.928 lat (usec): min=138, max=613, avg=196.16, stdev=37.47 00:10:59.928 clat percentiles (usec): 00:10:59.928 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:10:59.928 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 174], 60.00th=[ 190], 00:10:59.928 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 225], 95.00th=[ 239], 00:10:59.928 | 99.00th=[ 306], 99.50th=[ 330], 99.90th=[ 441], 99.95th=[ 594], 00:10:59.928 | 99.99th=[ 594] 
00:10:59.928 bw ( KiB/s): min= 2424, max= 9864, per=27.57%, avg=6144.00, stdev=5260.87, samples=2 00:10:59.928 iops : min= 606, max= 2466, avg=1536.00, stdev=1315.22, samples=2 00:10:59.928 lat (usec) : 250=88.64%, 500=10.81%, 750=0.12% 00:10:59.928 lat (msec) : 50=0.43% 00:10:59.928 cpu : usr=1.88%, sys=3.27%, ctx=2572, majf=0, minf=2 00:10:59.928 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.928 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.928 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.928 job2: (groupid=0, jobs=1): err= 0: pid=159662: Fri Jul 26 14:05:07 2024 00:10:59.928 read: IOPS=1387, BW=5550KiB/s (5684kB/s)(5556KiB/1001msec) 00:10:59.928 slat (nsec): min=5390, max=45159, avg=12307.41, stdev=6085.28 00:10:59.928 clat (usec): min=182, max=40988, avg=476.23, stdev=3019.48 00:10:59.928 lat (usec): min=188, max=41020, avg=488.54, stdev=3020.38 00:10:59.928 clat percentiles (usec): 00:10:59.928 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 210], 00:10:59.928 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 235], 60.00th=[ 243], 00:10:59.928 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 285], 00:10:59.928 | 99.00th=[ 347], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:10:59.928 | 99.99th=[41157] 00:10:59.928 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:59.928 slat (nsec): min=7135, max=45635, avg=12035.95, stdev=5080.50 00:10:59.928 clat (usec): min=142, max=629, avg=190.35, stdev=34.40 00:10:59.928 lat (usec): min=151, max=646, avg=202.39, stdev=36.29 00:10:59.928 clat percentiles (usec): 00:10:59.928 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 165], 00:10:59.928 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 196], 00:10:59.928 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 227], 95.00th=[ 237], 00:10:59.928 | 99.00th=[ 322], 99.50th=[ 343], 99.90th=[ 482], 99.95th=[ 627], 00:10:59.928 | 99.99th=[ 627] 00:10:59.928 bw ( KiB/s): min= 5288, max= 5288, per=23.73%, avg=5288.00, stdev= 0.00, samples=1 00:10:59.928 iops : min= 1322, max= 1322, avg=1322.00, stdev= 0.00, samples=1 00:10:59.928 lat (usec) : 250=84.92%, 500=14.70%, 750=0.03% 00:10:59.928 lat (msec) : 10=0.03%, 20=0.03%, 50=0.27% 00:10:59.928 cpu : usr=1.90%, sys=4.90%, ctx=2925, majf=0, minf=1 00:10:59.928 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.928 issued rwts: total=1389,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.928 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.928 job3: (groupid=0, jobs=1): err= 0: pid=159663: Fri Jul 26 14:05:07 2024 00:10:59.928 read: IOPS=202, BW=811KiB/s (831kB/s)(812KiB/1001msec) 00:10:59.928 slat (nsec): min=5578, max=52262, avg=13893.76, stdev=6575.36 00:10:59.928 clat (usec): min=196, max=41898, avg=4330.66, stdev=12189.15 00:10:59.928 lat (usec): min=203, max=41914, avg=4344.55, stdev=12189.99 00:10:59.928 clat percentiles (usec): 00:10:59.928 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 219], 00:10:59.928 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 243], 00:10:59.928 | 70.00th=[ 265], 
80.00th=[ 326], 90.00th=[13173], 95.00th=[41157], 00:10:59.928 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:59.928 | 99.99th=[41681] 00:10:59.928 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:59.928 slat (nsec): min=12587, max=63258, avg=15022.36, stdev=3497.71 00:10:59.928 clat (usec): min=171, max=421, avg=210.08, stdev=15.39 00:10:59.928 lat (usec): min=186, max=435, avg=225.10, stdev=15.88 00:10:59.928 clat percentiles (usec): 00:10:59.928 | 1.00th=[ 176], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 202], 00:10:59.928 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 212], 00:10:59.928 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 225], 95.00th=[ 229], 00:10:59.928 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 420], 99.95th=[ 420], 00:10:59.928 | 99.99th=[ 420] 00:10:59.928 bw ( KiB/s): min= 4096, max= 4096, per=18.38%, avg=4096.00, stdev= 0.00, samples=1 00:10:59.928 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:59.928 lat (usec) : 250=89.93%, 500=6.85%, 750=0.28% 00:10:59.928 lat (msec) : 20=0.14%, 50=2.80% 00:10:59.928 cpu : usr=0.50%, sys=1.00%, ctx=717, majf=0, minf=1 00:10:59.928 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.928 issued rwts: total=203,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.928 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.928 00:10:59.928 Run status group 0 (all jobs): 00:10:59.928 READ: bw=17.6MiB/s (18.4MB/s), 811KiB/s-7653KiB/s (831kB/s-7837kB/s), io=17.8MiB (18.6MB), run=1001-1011msec 00:10:59.928 WRITE: bw=21.8MiB/s (22.8MB/s), 2046KiB/s-8159KiB/s (2095kB/s-8355kB/s), io=22.0MiB (23.1MB), run=1001-1011msec 00:10:59.928 00:10:59.928 Disk stats (read/write): 00:10:59.928 nvme0n1: ios=1664/2048, merge=0/0, ticks=587/351, in_queue=938, util=99.70% 00:10:59.928 nvme0n2: ios=1072/1536, merge=0/0, ticks=870/271, in_queue=1141, util=98.36% 00:10:59.928 nvme0n3: ios=1018/1024, merge=0/0, ticks=557/205, in_queue=762, util=87.89% 00:10:59.928 nvme0n4: ios=92/512, merge=0/0, ticks=1598/102, in_queue=1700, util=96.27% 00:10:59.928 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:59.928 [global] 00:10:59.928 thread=1 00:10:59.928 invalidate=1 00:10:59.928 rw=write 00:10:59.928 time_based=1 00:10:59.928 runtime=1 00:10:59.928 ioengine=libaio 00:10:59.928 direct=1 00:10:59.928 bs=4096 00:10:59.928 iodepth=128 00:10:59.928 norandommap=0 00:10:59.928 numjobs=1 00:10:59.928 00:10:59.928 verify_dump=1 00:10:59.928 verify_backlog=512 00:10:59.928 verify_state_save=0 00:10:59.928 do_verify=1 00:10:59.928 verify=crc32c-intel 00:10:59.928 [job0] 00:10:59.928 filename=/dev/nvme0n1 00:10:59.928 [job1] 00:10:59.928 filename=/dev/nvme0n2 00:10:59.928 [job2] 00:10:59.928 filename=/dev/nvme0n3 00:10:59.928 [job3] 00:10:59.928 filename=/dev/nvme0n4 00:10:59.928 Could not set queue depth (nvme0n1) 00:10:59.928 Could not set queue depth (nvme0n2) 00:10:59.928 Could not set queue depth (nvme0n3) 00:10:59.928 Could not set queue depth (nvme0n4) 00:11:00.187 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.187 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.187 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.187 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.187 fio-3.35 00:11:00.187 Starting 4 threads 00:11:01.563 00:11:01.563 job0: (groupid=0, jobs=1): err= 0: pid=159895: Fri Jul 26 14:05:09 2024 00:11:01.563 read: IOPS=3660, BW=14.3MiB/s (15.0MB/s)(14.3MiB/1002msec) 00:11:01.563 slat (usec): min=3, max=19225, avg=129.55, stdev=888.56 00:11:01.563 clat (usec): min=629, max=61522, avg=15445.30, stdev=10417.24 00:11:01.563 lat (usec): min=3965, max=61549, avg=15574.84, stdev=10511.68 00:11:01.563 clat percentiles (usec): 00:11:01.563 | 1.00th=[ 4686], 5.00th=[ 9241], 10.00th=[ 9372], 20.00th=[10028], 00:11:01.563 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11469], 60.00th=[12125], 00:11:01.563 | 70.00th=[12780], 80.00th=[15795], 90.00th=[34866], 95.00th=[39584], 00:11:01.563 | 99.00th=[51643], 99.50th=[57410], 99.90th=[60031], 99.95th=[60031], 00:11:01.563 | 99.99th=[61604] 00:11:01.563 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:11:01.563 slat (usec): min=4, max=14406, avg=118.36, stdev=640.25 00:11:01.563 clat (usec): min=7055, max=61523, avg=16951.66, stdev=11402.18 00:11:01.563 lat (usec): min=7072, max=61529, avg=17070.02, stdev=11469.83 00:11:01.563 clat percentiles (usec): 00:11:01.563 | 1.00th=[ 7308], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[10290], 00:11:01.563 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11994], 60.00th=[12387], 00:11:01.563 | 70.00th=[15926], 80.00th=[22676], 90.00th=[38011], 95.00th=[45876], 00:11:01.563 | 99.00th=[54789], 99.50th=[55837], 99.90th=[57934], 99.95th=[57934], 00:11:01.563 | 99.99th=[61604] 00:11:01.563 bw ( KiB/s): min=16032, max=16416, per=25.20%, avg=16224.00, stdev=271.53, samples=2 00:11:01.563 iops : min= 4008, max= 4104, avg=4056.00, stdev=67.88, samples=2 00:11:01.563 lat (usec) : 750=0.01% 00:11:01.563 lat (msec) : 4=0.03%, 10=18.04%, 20=61.75%, 50=17.48%, 100=2.69% 00:11:01.563 cpu : usr=5.19%, sys=8.79%, ctx=454, majf=0, minf=17 00:11:01.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:01.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.563 issued rwts: total=3668,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.563 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.563 job1: (groupid=0, jobs=1): err= 0: pid=159896: Fri Jul 26 14:05:09 2024 00:11:01.563 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:11:01.563 slat (usec): min=2, max=17719, avg=118.29, stdev=813.93 00:11:01.563 clat (usec): min=8231, max=50684, avg=14888.26, stdev=6345.26 00:11:01.563 lat (usec): min=8241, max=50715, avg=15006.56, stdev=6397.08 00:11:01.563 clat percentiles (usec): 00:11:01.563 | 1.00th=[ 8455], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[11076], 00:11:01.563 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12387], 60.00th=[13304], 00:11:01.563 | 70.00th=[14353], 80.00th=[17433], 90.00th=[23200], 95.00th=[28443], 00:11:01.563 | 99.00th=[39584], 99.50th=[39584], 99.90th=[40109], 99.95th=[45876], 00:11:01.563 | 99.99th=[50594] 00:11:01.563 write: IOPS=4527, BW=17.7MiB/s (18.5MB/s)(17.8MiB/1004msec); 0 zone resets 00:11:01.563 slat (usec): min=3, max=19840, avg=103.96, stdev=636.77 00:11:01.563 
clat (usec): min=405, max=41940, avg=14520.93, stdev=6245.51 00:11:01.563 lat (usec): min=424, max=41982, avg=14624.88, stdev=6294.57 00:11:01.563 clat percentiles (usec): 00:11:01.563 | 1.00th=[ 3130], 5.00th=[ 7570], 10.00th=[ 9634], 20.00th=[10552], 00:11:01.563 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11731], 60.00th=[13698], 00:11:01.563 | 70.00th=[14615], 80.00th=[21627], 90.00th=[22938], 95.00th=[26608], 00:11:01.563 | 99.00th=[34341], 99.50th=[37487], 99.90th=[38536], 99.95th=[40109], 00:11:01.563 | 99.99th=[41681] 00:11:01.563 bw ( KiB/s): min=16384, max=18968, per=27.46%, avg=17676.00, stdev=1827.16, samples=2 00:11:01.563 iops : min= 4096, max= 4742, avg=4419.00, stdev=456.79, samples=2 00:11:01.563 lat (usec) : 500=0.02% 00:11:01.563 lat (msec) : 4=0.80%, 10=9.79%, 20=69.36%, 50=20.02%, 100=0.01% 00:11:01.563 cpu : usr=3.79%, sys=6.58%, ctx=541, majf=0, minf=9 00:11:01.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:01.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.563 issued rwts: total=4096,4546,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.563 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.564 job2: (groupid=0, jobs=1): err= 0: pid=159897: Fri Jul 26 14:05:09 2024 00:11:01.564 read: IOPS=4316, BW=16.9MiB/s (17.7MB/s)(16.9MiB/1002msec) 00:11:01.564 slat (usec): min=2, max=14794, avg=110.52, stdev=635.99 00:11:01.564 clat (usec): min=662, max=43914, avg=14192.35, stdev=4859.03 00:11:01.564 lat (usec): min=3660, max=43923, avg=14302.87, stdev=4894.14 00:11:01.564 clat percentiles (usec): 00:11:01.564 | 1.00th=[ 5407], 5.00th=[ 9896], 10.00th=[11207], 20.00th=[12256], 00:11:01.564 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13829], 00:11:01.564 | 70.00th=[14484], 80.00th=[15270], 90.00th=[17433], 95.00th=[18744], 00:11:01.564 | 99.00th=[41681], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:11:01.564 | 99.99th=[43779] 00:11:01.564 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:11:01.564 slat (usec): min=3, max=27362, avg=106.84, stdev=772.81 00:11:01.564 clat (usec): min=5079, max=54853, avg=14185.76, stdev=5652.51 00:11:01.564 lat (usec): min=5085, max=54869, avg=14292.60, stdev=5700.67 00:11:01.564 clat percentiles (usec): 00:11:01.564 | 1.00th=[ 6915], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11994], 00:11:01.564 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12911], 60.00th=[13566], 00:11:01.564 | 70.00th=[14091], 80.00th=[14746], 90.00th=[17171], 95.00th=[22414], 00:11:01.564 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[49546], 00:11:01.564 | 99.99th=[54789] 00:11:01.564 bw ( KiB/s): min=17976, max=18925, per=28.66%, avg=18450.50, stdev=671.04, samples=2 00:11:01.564 iops : min= 4494, max= 4731, avg=4612.50, stdev=167.58, samples=2 00:11:01.564 lat (usec) : 750=0.01% 00:11:01.564 lat (msec) : 4=0.27%, 10=5.18%, 20=89.09%, 50=5.43%, 100=0.02% 00:11:01.564 cpu : usr=3.10%, sys=6.39%, ctx=355, majf=0, minf=9 00:11:01.564 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:01.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.564 issued rwts: total=4325,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.564 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.564 job3: 
(groupid=0, jobs=1): err= 0: pid=159898: Fri Jul 26 14:05:09 2024 00:11:01.564 read: IOPS=3296, BW=12.9MiB/s (13.5MB/s)(13.5MiB/1046msec) 00:11:01.564 slat (usec): min=2, max=21897, avg=112.90, stdev=1004.77 00:11:01.564 clat (usec): min=5125, max=53724, avg=18246.12, stdev=9675.74 00:11:01.564 lat (usec): min=5135, max=57978, avg=18359.02, stdev=9741.03 00:11:01.564 clat percentiles (usec): 00:11:01.564 | 1.00th=[ 7177], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[11994], 00:11:01.564 | 30.00th=[13173], 40.00th=[14615], 50.00th=[15139], 60.00th=[15795], 00:11:01.564 | 70.00th=[18744], 80.00th=[24249], 90.00th=[28705], 95.00th=[43779], 00:11:01.564 | 99.00th=[53216], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:11:01.564 | 99.99th=[53740] 00:11:01.564 write: IOPS=3426, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1046msec); 0 zone resets 00:11:01.564 slat (usec): min=3, max=17714, avg=121.92, stdev=762.52 00:11:01.564 clat (usec): min=1256, max=67928, avg=19482.68, stdev=12009.45 00:11:01.564 lat (usec): min=1279, max=67935, avg=19604.60, stdev=12098.76 00:11:01.564 clat percentiles (usec): 00:11:01.564 | 1.00th=[ 3163], 5.00th=[ 5014], 10.00th=[ 6718], 20.00th=[10683], 00:11:01.564 | 30.00th=[12911], 40.00th=[14091], 50.00th=[19006], 60.00th=[21627], 00:11:01.564 | 70.00th=[22414], 80.00th=[23200], 90.00th=[31327], 95.00th=[47449], 00:11:01.564 | 99.00th=[65274], 99.50th=[66323], 99.90th=[67634], 99.95th=[67634], 00:11:01.564 | 99.99th=[67634] 00:11:01.564 bw ( KiB/s): min=14120, max=14552, per=22.27%, avg=14336.00, stdev=305.47, samples=2 00:11:01.564 iops : min= 3530, max= 3638, avg=3584.00, stdev=76.37, samples=2 00:11:01.564 lat (msec) : 2=0.23%, 4=1.01%, 10=11.48%, 20=50.71%, 50=32.78% 00:11:01.564 lat (msec) : 100=3.80% 00:11:01.564 cpu : usr=2.30%, sys=5.45%, ctx=370, majf=0, minf=15 00:11:01.564 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:01.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.564 issued rwts: total=3448,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.564 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.564 00:11:01.564 Run status group 0 (all jobs): 00:11:01.564 READ: bw=58.0MiB/s (60.8MB/s), 12.9MiB/s-16.9MiB/s (13.5MB/s-17.7MB/s), io=60.7MiB (63.6MB), run=1002-1046msec 00:11:01.564 WRITE: bw=62.9MiB/s (65.9MB/s), 13.4MiB/s-18.0MiB/s (14.0MB/s-18.8MB/s), io=65.8MiB (69.0MB), run=1002-1046msec 00:11:01.564 00:11:01.564 Disk stats (read/write): 00:11:01.564 nvme0n1: ios=3086/3073, merge=0/0, ticks=17689/16862, in_queue=34551, util=96.99% 00:11:01.564 nvme0n2: ios=3630/3615, merge=0/0, ticks=29209/33473, in_queue=62682, util=96.85% 00:11:01.564 nvme0n3: ios=3637/3866, merge=0/0, ticks=25993/26688, in_queue=52681, util=96.64% 00:11:01.564 nvme0n4: ios=2605/2889, merge=0/0, ticks=45212/59017, in_queue=104229, util=97.04% 00:11:01.564 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:01.564 [global] 00:11:01.564 thread=1 00:11:01.564 invalidate=1 00:11:01.564 rw=randwrite 00:11:01.564 time_based=1 00:11:01.564 runtime=1 00:11:01.564 ioengine=libaio 00:11:01.564 direct=1 00:11:01.564 bs=4096 00:11:01.564 iodepth=128 00:11:01.564 norandommap=0 00:11:01.564 numjobs=1 00:11:01.564 00:11:01.564 verify_dump=1 00:11:01.564 verify_backlog=512 00:11:01.564 
verify_state_save=0 00:11:01.564 do_verify=1 00:11:01.564 verify=crc32c-intel 00:11:01.564 [job0] 00:11:01.564 filename=/dev/nvme0n1 00:11:01.564 [job1] 00:11:01.564 filename=/dev/nvme0n2 00:11:01.564 [job2] 00:11:01.564 filename=/dev/nvme0n3 00:11:01.564 [job3] 00:11:01.564 filename=/dev/nvme0n4 00:11:01.564 Could not set queue depth (nvme0n1) 00:11:01.564 Could not set queue depth (nvme0n2) 00:11:01.564 Could not set queue depth (nvme0n3) 00:11:01.564 Could not set queue depth (nvme0n4) 00:11:01.564 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.564 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.564 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.564 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.564 fio-3.35 00:11:01.564 Starting 4 threads 00:11:02.939 00:11:02.939 job0: (groupid=0, jobs=1): err= 0: pid=160240: Fri Jul 26 14:05:10 2024 00:11:02.939 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:11:02.939 slat (usec): min=2, max=10595, avg=92.70, stdev=617.64 00:11:02.939 clat (usec): min=3163, max=35704, avg=12170.07, stdev=4197.96 00:11:02.939 lat (usec): min=3168, max=35710, avg=12262.77, stdev=4241.81 00:11:02.939 clat percentiles (usec): 00:11:02.939 | 1.00th=[ 6587], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[ 9241], 00:11:02.939 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10552], 60.00th=[11338], 00:11:02.939 | 70.00th=[13304], 80.00th=[14484], 90.00th=[17171], 95.00th=[21627], 00:11:02.939 | 99.00th=[27919], 99.50th=[32375], 99.90th=[34341], 99.95th=[35914], 00:11:02.939 | 99.99th=[35914] 00:11:02.939 write: IOPS=5111, BW=20.0MiB/s (20.9MB/s)(20.1MiB/1006msec); 0 zone resets 00:11:02.939 slat (usec): min=3, max=11049, avg=82.33, stdev=505.58 00:11:02.939 clat (usec): min=362, max=72280, avg=12706.56, stdev=9379.35 00:11:02.939 lat (usec): min=402, max=72288, avg=12788.88, stdev=9421.87 00:11:02.939 clat percentiles (usec): 00:11:02.939 | 1.00th=[ 3425], 5.00th=[ 3916], 10.00th=[ 5604], 20.00th=[ 7898], 00:11:02.939 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10421], 60.00th=[10814], 00:11:02.939 | 70.00th=[11469], 80.00th=[12518], 90.00th=[24249], 95.00th=[32375], 00:11:02.939 | 99.00th=[47973], 99.50th=[64750], 99.90th=[71828], 99.95th=[71828], 00:11:02.939 | 99.99th=[71828] 00:11:02.939 bw ( KiB/s): min=16832, max=24128, per=34.71%, avg=20480.00, stdev=5159.05, samples=2 00:11:02.939 iops : min= 4208, max= 6032, avg=5120.00, stdev=1289.76, samples=2 00:11:02.939 lat (usec) : 500=0.03%, 750=0.12% 00:11:02.939 lat (msec) : 4=3.04%, 10=35.06%, 20=51.51%, 50=9.80%, 100=0.44% 00:11:02.939 cpu : usr=3.68%, sys=7.56%, ctx=490, majf=0, minf=1 00:11:02.939 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:02.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.939 issued rwts: total=5120,5142,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.939 job1: (groupid=0, jobs=1): err= 0: pid=160241: Fri Jul 26 14:05:10 2024 00:11:02.939 read: IOPS=3680, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1008msec) 00:11:02.939 slat (usec): min=2, max=13081, avg=115.79, stdev=770.23 00:11:02.939 clat (usec): 
min=1194, max=76223, avg=14157.00, stdev=9158.86 00:11:02.939 lat (usec): min=4735, max=76232, avg=14272.78, stdev=9227.13 00:11:02.939 clat percentiles (usec): 00:11:02.939 | 1.00th=[ 5669], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10421], 00:11:02.939 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:11:02.939 | 70.00th=[12256], 80.00th=[13566], 90.00th=[21890], 95.00th=[26346], 00:11:02.939 | 99.00th=[68682], 99.50th=[72877], 99.90th=[76022], 99.95th=[76022], 00:11:02.939 | 99.99th=[76022] 00:11:02.939 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:11:02.939 slat (usec): min=4, max=29817, avg=127.61, stdev=887.72 00:11:02.939 clat (usec): min=565, max=117865, avg=18364.77, stdev=19655.15 00:11:02.939 lat (usec): min=583, max=117886, avg=18492.38, stdev=19781.66 00:11:02.939 clat percentiles (usec): 00:11:02.939 | 1.00th=[ 1696], 5.00th=[ 3949], 10.00th=[ 8717], 20.00th=[ 10290], 00:11:02.939 | 30.00th=[ 11076], 40.00th=[ 11731], 50.00th=[ 12256], 60.00th=[ 12649], 00:11:02.939 | 70.00th=[ 14353], 80.00th=[ 20579], 90.00th=[ 31589], 95.00th=[ 65799], 00:11:02.939 | 99.00th=[110625], 99.50th=[111674], 99.90th=[117965], 99.95th=[117965], 00:11:02.939 | 99.99th=[117965] 00:11:02.939 bw ( KiB/s): min=12432, max=20320, per=27.75%, avg=16376.00, stdev=5577.66, samples=2 00:11:02.939 iops : min= 3108, max= 5080, avg=4094.00, stdev=1394.41, samples=2 00:11:02.939 lat (usec) : 750=0.10%, 1000=0.26% 00:11:02.939 lat (msec) : 2=0.90%, 4=1.49%, 10=11.85%, 20=67.52%, 50=13.28% 00:11:02.939 lat (msec) : 100=3.42%, 250=1.18% 00:11:02.939 cpu : usr=4.87%, sys=7.65%, ctx=373, majf=0, minf=1 00:11:02.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:02.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.940 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.940 job2: (groupid=0, jobs=1): err= 0: pid=160242: Fri Jul 26 14:05:10 2024 00:11:02.940 read: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:11:02.940 slat (usec): min=3, max=22864, avg=152.86, stdev=1114.73 00:11:02.940 clat (usec): min=4965, max=66085, avg=18831.98, stdev=9865.69 00:11:02.940 lat (usec): min=5722, max=66101, avg=18984.84, stdev=9963.67 00:11:02.940 clat percentiles (usec): 00:11:02.940 | 1.00th=[ 7832], 5.00th=[11207], 10.00th=[11731], 20.00th=[12256], 00:11:02.940 | 30.00th=[13042], 40.00th=[13698], 50.00th=[13960], 60.00th=[15795], 00:11:02.940 | 70.00th=[18220], 80.00th=[25297], 90.00th=[34341], 95.00th=[43254], 00:11:02.940 | 99.00th=[46924], 99.50th=[47449], 99.90th=[52691], 99.95th=[57410], 00:11:02.940 | 99.99th=[66323] 00:11:02.940 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:11:02.940 slat (usec): min=5, max=33882, avg=153.26, stdev=1319.92 00:11:02.940 clat (usec): min=3823, max=89267, avg=22045.41, stdev=13440.13 00:11:02.940 lat (usec): min=3831, max=89319, avg=22198.67, stdev=13589.64 00:11:02.940 clat percentiles (usec): 00:11:02.940 | 1.00th=[ 5735], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[11863], 00:11:02.940 | 30.00th=[12256], 40.00th=[12911], 50.00th=[17433], 60.00th=[21890], 00:11:02.940 | 70.00th=[24773], 80.00th=[31065], 90.00th=[46924], 95.00th=[54789], 00:11:02.940 | 99.00th=[56361], 99.50th=[56361], 99.90th=[68682], 99.95th=[76022], 00:11:02.940 | 99.99th=[89654] 
00:11:02.940 bw ( KiB/s): min= 9392, max=15184, per=20.82%, avg=12288.00, stdev=4095.56, samples=2 00:11:02.940 iops : min= 2348, max= 3796, avg=3072.00, stdev=1023.89, samples=2 00:11:02.940 lat (msec) : 4=0.13%, 10=4.26%, 20=59.49%, 50=32.93%, 100=3.20% 00:11:02.940 cpu : usr=4.28%, sys=6.37%, ctx=303, majf=0, minf=1 00:11:02.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:02.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.940 issued rwts: total=3060,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.940 job3: (groupid=0, jobs=1): err= 0: pid=160243: Fri Jul 26 14:05:10 2024 00:11:02.940 read: IOPS=2219, BW=8876KiB/s (9090kB/s)(8912KiB/1004msec) 00:11:02.940 slat (usec): min=2, max=28031, avg=212.38, stdev=1548.76 00:11:02.940 clat (usec): min=544, max=94356, avg=28698.26, stdev=19650.63 00:11:02.940 lat (usec): min=3407, max=94389, avg=28910.64, stdev=19786.41 00:11:02.940 clat percentiles (usec): 00:11:02.940 | 1.00th=[ 3687], 5.00th=[ 6718], 10.00th=[ 9634], 20.00th=[12387], 00:11:02.940 | 30.00th=[13435], 40.00th=[18482], 50.00th=[25297], 60.00th=[28443], 00:11:02.940 | 70.00th=[34866], 80.00th=[43254], 90.00th=[58459], 95.00th=[72877], 00:11:02.940 | 99.00th=[77071], 99.50th=[84411], 99.90th=[89654], 99.95th=[89654], 00:11:02.940 | 99.99th=[93848] 00:11:02.940 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:11:02.940 slat (usec): min=3, max=20545, avg=200.47, stdev=1229.08 00:11:02.940 clat (usec): min=6999, max=86109, avg=24772.62, stdev=14407.60 00:11:02.940 lat (usec): min=7006, max=86114, avg=24973.08, stdev=14544.71 00:11:02.940 clat percentiles (usec): 00:11:02.940 | 1.00th=[ 7111], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[12911], 00:11:02.940 | 30.00th=[14091], 40.00th=[16909], 50.00th=[21890], 60.00th=[25035], 00:11:02.940 | 70.00th=[26870], 80.00th=[34866], 90.00th=[46924], 95.00th=[53740], 00:11:02.940 | 99.00th=[73925], 99.50th=[84411], 99.90th=[86508], 99.95th=[86508], 00:11:02.940 | 99.99th=[86508] 00:11:02.940 bw ( KiB/s): min= 9928, max=10552, per=17.35%, avg=10240.00, stdev=441.23, samples=2 00:11:02.940 iops : min= 2482, max= 2638, avg=2560.00, stdev=110.31, samples=2 00:11:02.940 lat (usec) : 750=0.02% 00:11:02.940 lat (msec) : 4=0.67%, 10=7.52%, 20=35.96%, 50=43.42%, 100=12.41% 00:11:02.940 cpu : usr=2.09%, sys=2.99%, ctx=249, majf=0, minf=1 00:11:02.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:02.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.940 issued rwts: total=2228,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.940 00:11:02.940 Run status group 0 (all jobs): 00:11:02.940 READ: bw=54.7MiB/s (57.4MB/s), 8876KiB/s-19.9MiB/s (9090kB/s-20.8MB/s), io=55.1MiB (57.8MB), run=1004-1008msec 00:11:02.940 WRITE: bw=57.6MiB/s (60.4MB/s), 9.96MiB/s-20.0MiB/s (10.4MB/s-20.9MB/s), io=58.1MiB (60.9MB), run=1004-1008msec 00:11:02.940 00:11:02.940 Disk stats (read/write): 00:11:02.940 nvme0n1: ios=4613/4647, merge=0/0, ticks=39742/39604, in_queue=79346, util=86.77% 00:11:02.940 nvme0n2: ios=3346/3584, merge=0/0, ticks=24604/43926, in_queue=68530, util=98.38% 00:11:02.940 nvme0n3: 
ios=2073/2559, merge=0/0, ticks=33157/48267, in_queue=81424, util=97.81% 00:11:02.940 nvme0n4: ios=1619/2048, merge=0/0, ticks=21131/23483, in_queue=44614, util=87.91% 00:11:02.940 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:02.940 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=160381 00:11:02.940 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:02.940 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:02.940 [global] 00:11:02.940 thread=1 00:11:02.940 invalidate=1 00:11:02.940 rw=read 00:11:02.940 time_based=1 00:11:02.940 runtime=10 00:11:02.940 ioengine=libaio 00:11:02.940 direct=1 00:11:02.940 bs=4096 00:11:02.940 iodepth=1 00:11:02.940 norandommap=1 00:11:02.940 numjobs=1 00:11:02.940 00:11:02.940 [job0] 00:11:02.940 filename=/dev/nvme0n1 00:11:02.940 [job1] 00:11:02.940 filename=/dev/nvme0n2 00:11:02.940 [job2] 00:11:02.940 filename=/dev/nvme0n3 00:11:02.940 [job3] 00:11:02.940 filename=/dev/nvme0n4 00:11:02.940 Could not set queue depth (nvme0n1) 00:11:02.940 Could not set queue depth (nvme0n2) 00:11:02.940 Could not set queue depth (nvme0n3) 00:11:02.940 Could not set queue depth (nvme0n4) 00:11:02.940 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.940 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.940 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.940 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.940 fio-3.35 00:11:02.940 Starting 4 threads 00:11:06.221 14:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:06.221 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=51044352, buflen=4096 00:11:06.221 fio: pid=160483, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:06.221 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:06.478 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.478 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:06.479 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=23728128, buflen=4096 00:11:06.479 fio: pid=160482, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:06.736 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.736 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:06.736 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=5234688, buflen=4096 00:11:06.736 fio: pid=160480, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:06.994 14:05:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.994 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:06.995 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=5398528, buflen=4096 00:11:06.995 fio: pid=160481, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:06.995 00:11:06.995 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=160480: Fri Jul 26 14:05:14 2024 00:11:06.995 read: IOPS=372, BW=1487KiB/s (1523kB/s)(5112KiB/3437msec) 00:11:06.995 slat (usec): min=6, max=14867, avg=38.02, stdev=568.24 00:11:06.995 clat (usec): min=189, max=42982, avg=2629.75, stdev=9608.96 00:11:06.995 lat (usec): min=197, max=42998, avg=2656.16, stdev=9614.42 00:11:06.995 clat percentiles (usec): 00:11:06.995 | 1.00th=[ 198], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 235], 00:11:06.995 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 260], 60.00th=[ 273], 00:11:06.995 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 330], 95.00th=[41157], 00:11:06.995 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:11:06.995 | 99.99th=[42730] 00:11:06.995 bw ( KiB/s): min= 176, max= 7232, per=7.46%, avg=1685.33, stdev=2748.57, samples=6 00:11:06.995 iops : min= 44, max= 1808, avg=421.33, stdev=687.14, samples=6 00:11:06.995 lat (usec) : 250=40.27%, 500=53.79% 00:11:06.995 lat (msec) : 2=0.08%, 10=0.08%, 50=5.71% 00:11:06.995 cpu : usr=0.29%, sys=0.93%, ctx=1281, majf=0, minf=1 00:11:06.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.995 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.995 issued rwts: total=1279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.995 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=160481: Fri Jul 26 14:05:14 2024 00:11:06.995 read: IOPS=357, BW=1428KiB/s (1463kB/s)(5272KiB/3691msec) 00:11:06.995 slat (usec): min=5, max=19930, avg=46.70, stdev=767.15 00:11:06.995 clat (usec): min=172, max=41170, avg=2733.77, stdev=9738.92 00:11:06.995 lat (usec): min=178, max=60983, avg=2780.48, stdev=9834.13 00:11:06.995 clat percentiles (usec): 00:11:06.995 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 206], 00:11:06.995 | 30.00th=[ 217], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[ 247], 00:11:06.995 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[41157], 00:11:06.995 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:06.995 | 99.99th=[41157] 00:11:06.995 bw ( KiB/s): min= 96, max= 5673, per=5.93%, avg=1341.86, stdev=2178.71, samples=7 00:11:06.995 iops : min= 24, max= 1418, avg=335.43, stdev=544.59, samples=7 00:11:06.995 lat (usec) : 250=63.31%, 500=30.02%, 750=0.23%, 1000=0.08% 00:11:06.995 lat (msec) : 2=0.08%, 20=0.15%, 50=6.07% 00:11:06.995 cpu : usr=0.03%, sys=0.62%, ctx=1323, majf=0, minf=1 00:11:06.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.995 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.995 issued 
rwts: total=1319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.995 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=160482: Fri Jul 26 14:05:14 2024 00:11:06.995 read: IOPS=1823, BW=7291KiB/s (7466kB/s)(22.6MiB/3178msec) 00:11:06.995 slat (usec): min=4, max=12214, avg=12.25, stdev=182.97 00:11:06.995 clat (usec): min=170, max=42049, avg=530.47, stdev=3493.77 00:11:06.995 lat (usec): min=175, max=42064, avg=542.71, stdev=3499.60 00:11:06.995 clat percentiles (usec): 00:11:06.995 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 204], 00:11:06.995 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:11:06.995 | 70.00th=[ 231], 80.00th=[ 243], 90.00th=[ 273], 95.00th=[ 310], 00:11:06.995 | 99.00th=[ 457], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:06.995 | 99.99th=[42206] 00:11:06.995 bw ( KiB/s): min= 144, max=17384, per=30.70%, avg=6937.33, stdev=6140.44, samples=6 00:11:06.995 iops : min= 36, max= 4346, avg=1734.33, stdev=1535.11, samples=6 00:11:06.995 lat (usec) : 250=83.33%, 500=15.79%, 750=0.10% 00:11:06.995 lat (msec) : 4=0.02%, 50=0.74% 00:11:06.995 cpu : usr=0.63%, sys=1.98%, ctx=5796, majf=0, minf=1 00:11:06.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.995 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.995 issued rwts: total=5794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.995 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=160483: Fri Jul 26 14:05:14 2024 00:11:06.995 read: IOPS=4285, BW=16.7MiB/s (17.6MB/s)(48.7MiB/2908msec) 00:11:06.995 slat (nsec): min=4730, max=52756, avg=11328.17, stdev=5419.48 00:11:06.995 clat (usec): min=171, max=3730, avg=218.79, stdev=45.24 00:11:06.995 lat (usec): min=177, max=3743, avg=230.12, stdev=46.03 00:11:06.995 clat percentiles (usec): 00:11:06.995 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 200], 00:11:06.995 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:11:06.995 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 249], 95.00th=[ 277], 00:11:06.995 | 99.00th=[ 310], 99.50th=[ 351], 99.90th=[ 494], 99.95th=[ 545], 00:11:06.995 | 99.99th=[ 1401] 00:11:06.995 bw ( KiB/s): min=15800, max=18584, per=75.97%, avg=17166.40, stdev=1011.98, samples=5 00:11:06.995 iops : min= 3950, max= 4646, avg=4291.60, stdev=253.00, samples=5 00:11:06.995 lat (usec) : 250=90.44%, 500=9.46%, 750=0.06%, 1000=0.02% 00:11:06.995 lat (msec) : 2=0.02%, 4=0.01% 00:11:06.995 cpu : usr=2.27%, sys=5.30%, ctx=12464, majf=0, minf=1 00:11:06.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.995 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.995 issued rwts: total=12463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.995 00:11:06.995 Run status group 0 (all jobs): 00:11:06.995 READ: bw=22.1MiB/s (23.1MB/s), 1428KiB/s-16.7MiB/s (1463kB/s-17.6MB/s), io=81.4MiB (85.4MB), run=2908-3691msec 00:11:06.995 00:11:06.995 Disk stats (read/write): 00:11:06.995 nvme0n1: ios=1276/0, merge=0/0, ticks=3268/0, 
in_queue=3268, util=95.59% 00:11:06.995 nvme0n2: ios=1316/0, merge=0/0, ticks=3505/0, in_queue=3505, util=95.31% 00:11:06.995 nvme0n3: ios=5603/0, merge=0/0, ticks=2964/0, in_queue=2964, util=96.23% 00:11:06.995 nvme0n4: ios=12322/0, merge=0/0, ticks=2777/0, in_queue=2777, util=100.00% 00:11:07.253 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.253 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:07.511 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.511 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:07.769 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.769 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:08.028 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.028 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:08.286 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:08.286 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 160381 00:11:08.286 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:08.286 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.286 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.286 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:08.286 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:08.286 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.286 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:08.286 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.286 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:08.286 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:08.286 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:08.286 nvmf hotplug test: fio failed as expected 00:11:08.286 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
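The Remote I/O errors above are the point of this pass: target/fio.sh starts a 10-second read job against each exported namespace, then deletes the backing raid and malloc bdevs while I/O is still in flight, so every job is expected to finish with err=121 (EREMOTEIO on Linux) and the script only continues once fio has failed. A minimal standalone sketch of the same hotplug pattern, assuming a single already-connected /dev/nvme0n1 backed by Malloc0 and paths relative to the spdk checkout (flags chosen to mirror the wrapper's read job; this is not the wrapper itself):

  # keep reads in flight against the NVMe-oF namespace for 10 seconds
  fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4096 \
      --iodepth=1 --direct=1 --time_based --runtime=10 &
  fio_pid=$!

  # pull the backing bdev out from under the subsystem mid-run
  sleep 3
  scripts/rpc.py bdev_malloc_delete Malloc0

  # fio must exit non-zero: a clean exit would mean the error path was never hit
  if wait "$fio_pid"; then
      echo "unexpected fio success" >&2
      exit 1
  fi
  echo "nvmf hotplug test: fio failed as expected"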
00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:08.544 rmmod nvme_tcp 00:11:08.544 rmmod nvme_fabrics 00:11:08.544 rmmod nvme_keyring 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 158350 ']' 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 158350 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 158350 ']' 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 158350 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 158350 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 158350' 00:11:08.544 killing process with pid 158350 00:11:08.544 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 158350 00:11:08.545 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 158350 00:11:08.805 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:08.805 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:08.805 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:08.805 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.805 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.805 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.805 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.805 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:11.348 00:11:11.348 real 0m23.360s 00:11:11.348 user 1m20.780s 00:11:11.348 sys 0m7.266s 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.348 ************************************ 00:11:11.348 END TEST nvmf_fio_target 00:11:11.348 ************************************ 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:11.348 ************************************ 00:11:11.348 START TEST nvmf_bdevio 00:11:11.348 ************************************ 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:11.348 * Looking for test storage... 
00:11:11.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.348 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:11.349 14:05:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:13.254 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:13.254 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:13.254 Found net devices under 0000:09:00.0: cvl_0_0 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:13.254 Found net devices under 0000:09:00.1: cvl_0_1 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.254 14:05:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:11:13.254 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:11:13.255 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:11:13.255 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:13.255 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:13.255 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:13.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:13.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms
00:11:13.255
00:11:13.255 --- 10.0.0.2 ping statistics ---
00:11:13.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:13.255 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:13.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:13.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms
00:11:13.255
00:11:13.255 --- 10.0.0.1 ping statistics ---
00:11:13.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:13.255 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=163107
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 163107
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 163107 ']'
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:13.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:13.255 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:13.255 [2024-07-26 14:05:21.132746] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization...
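The namespace topology that nvmf_tcp_init builds in the trace above can be reproduced by hand; a minimal sketch, assuming the same cvl_0_0/cvl_0_1 port names the ice driver exposed on this node:

    # Target port lives in its own network namespace; initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # initiator-to-target sanity check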
00:11:13.255 [2024-07-26 14:05:21.132837] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.255 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.255 [2024-07-26 14:05:21.197501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.514 [2024-07-26 14:05:21.301879] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.514 [2024-07-26 14:05:21.301930] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.514 [2024-07-26 14:05:21.301957] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.514 [2024-07-26 14:05:21.301967] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.514 [2024-07-26 14:05:21.301976] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.514 [2024-07-26 14:05:21.302128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:13.514 [2024-07-26 14:05:21.302619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:13.514 [2024-07-26 14:05:21.302669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:13.514 [2024-07-26 14:05:21.302673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:13.514 [2024-07-26 14:05:21.460066] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:13.514 Malloc0 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:13.514 [2024-07-26 14:05:21.513776] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:13.514 { 00:11:13.514 "params": { 00:11:13.514 "name": "Nvme$subsystem", 00:11:13.514 "trtype": "$TEST_TRANSPORT", 00:11:13.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:13.514 "adrfam": "ipv4", 00:11:13.514 "trsvcid": "$NVMF_PORT", 00:11:13.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:13.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:13.514 "hdgst": ${hdgst:-false}, 00:11:13.514 "ddgst": ${ddgst:-false} 00:11:13.514 }, 00:11:13.514 "method": "bdev_nvme_attach_controller" 00:11:13.514 } 00:11:13.514 EOF 00:11:13.514 )") 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
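The gen_nvmf_target_json helper traced here assembles one bdev_nvme_attach_controller stanza per subsystem from a heredoc and runs the result through jq; a condensed sketch of the same pattern, with the values hard-coded for a single subsystem (the real helper in test/nvmf/common.sh substitutes them from the test environment):

    config=()
    for subsystem in 1; do
      config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
    )")
    done
    # Comma-join the stanzas in a subshell so the IFS change stays local, then validate with jq.
    (IFS=,; printf '%s\n' "${config[*]}") | jq .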
00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:13.514 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:13.514 "params": { 00:11:13.514 "name": "Nvme1", 00:11:13.514 "trtype": "tcp", 00:11:13.514 "traddr": "10.0.0.2", 00:11:13.514 "adrfam": "ipv4", 00:11:13.514 "trsvcid": "4420", 00:11:13.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:13.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:13.514 "hdgst": false, 00:11:13.514 "ddgst": false 00:11:13.514 }, 00:11:13.514 "method": "bdev_nvme_attach_controller" 00:11:13.514 }' 00:11:13.772 [2024-07-26 14:05:21.559658] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:11:13.772 [2024-07-26 14:05:21.559731] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163137 ] 00:11:13.772 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.772 [2024-07-26 14:05:21.620179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:13.772 [2024-07-26 14:05:21.731990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.772 [2024-07-26 14:05:21.732039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.772 [2024-07-26 14:05:21.732043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.031 I/O targets: 00:11:14.031 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:14.031 00:11:14.031 00:11:14.031 CUnit - A unit testing framework for C - Version 2.1-3 00:11:14.031 http://cunit.sourceforge.net/ 00:11:14.031 00:11:14.031 00:11:14.031 Suite: bdevio tests on: Nvme1n1 00:11:14.031 Test: blockdev write read block ...passed 00:11:14.031 Test: blockdev write zeroes read block ...passed 00:11:14.031 Test: blockdev write zeroes read no split ...passed 00:11:14.289 Test: blockdev write zeroes read split ...passed 00:11:14.289 Test: blockdev write zeroes read split partial ...passed 00:11:14.289 Test: blockdev reset ...[2024-07-26 14:05:22.069257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:14.289 [2024-07-26 14:05:22.069371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621580 (9): Bad file descriptor 00:11:14.289 [2024-07-26 14:05:22.085493] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:14.289 passed
00:11:14.289 Test: blockdev write read 8 blocks ...passed
00:11:14.289 Test: blockdev write read size > 128k ...passed
00:11:14.289 Test: blockdev write read invalid size ...passed
00:11:14.289 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:14.289 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:14.289 Test: blockdev write read max offset ...passed
00:11:14.289 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:14.289 Test: blockdev writev readv 8 blocks ...passed
00:11:14.289 Test: blockdev writev readv 30 x 1block ...passed
00:11:14.547 Test: blockdev writev readv block ...passed
00:11:14.547 Test: blockdev writev readv size > 128k ...passed
00:11:14.547 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:14.547 Test: blockdev comparev and writev ...[2024-07-26 14:05:22.340076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:14.547 [2024-07-26 14:05:22.340113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:11:14.547 [2024-07-26 14:05:22.340140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:14.547 [2024-07-26 14:05:22.340157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:11:14.547 [2024-07-26 14:05:22.340500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:14.547 [2024-07-26 14:05:22.340525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:11:14.548 [2024-07-26 14:05:22.340557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:14.548 [2024-07-26 14:05:22.340575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:11:14.548 [2024-07-26 14:05:22.340923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:14.548 [2024-07-26 14:05:22.340949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:11:14.548 [2024-07-26 14:05:22.340971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:14.548 [2024-07-26 14:05:22.340988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:11:14.548 [2024-07-26 14:05:22.341320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:14.548 [2024-07-26 14:05:22.341344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:11:14.548 [2024-07-26 14:05:22.341366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:14.548 [2024-07-26 14:05:22.341383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:11:14.548 passed
00:11:14.548 Test: blockdev nvme passthru rw ...passed
00:11:14.548 Test: blockdev nvme passthru vendor specific ...[2024-07-26 14:05:22.424791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:14.548 [2024-07-26 14:05:22.424819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:11:14.548 [2024-07-26 14:05:22.424963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:14.548 [2024-07-26 14:05:22.424986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:11:14.548 [2024-07-26 14:05:22.425124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:14.548 [2024-07-26 14:05:22.425148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:11:14.548 [2024-07-26 14:05:22.425290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:14.548 [2024-07-26 14:05:22.425314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:11:14.548 passed
00:11:14.548 Test: blockdev nvme admin passthru ...passed
00:11:14.548 Test: blockdev copy ...passed
00:11:14.548
00:11:14.548 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:11:14.548               suites      1      1    n/a      0        0
00:11:14.548                tests     23     23     23      0        0
00:11:14.548              asserts    152    152    152      0      n/a
00:11:14.548
00:11:14.548 Elapsed time = 1.132 seconds
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:14.806 rmmod nvme_tcp
00:11:14.806 rmmod nvme_fabrics
00:11:14.806 rmmod nvme_keyring
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0
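End to end, the suite above amounts to five RPCs against the namespaced target followed by the bdevio run; a hedged sketch of reproducing it by hand, assuming a running nvmf_tgt, SPDK's stock scripts/rpc.py client, and the gen_nvmf_target_json helper sourced from test/nvmf/common.sh:

    # Provision the target (the same RPCs bdevio.sh@18-22 issued above).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Then drive the CUnit suite, feeding the attach-controller JSON over a pipe,
    # mirroring the --json /dev/fd/62 invocation in the trace.
    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)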
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 163107 ']'
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 163107
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 163107 ']'
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 163107
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 163107
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']'
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 163107'
00:11:14.806 killing process with pid 163107
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 163107
00:11:14.806 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 163107
00:11:15.064 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:15.064 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:15.064 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:15.064 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:15.064 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:15.064 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:15.064 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:15.064 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:11:17.604
00:11:17.604 real 0m6.243s
00:11:17.604 user 0m9.788s
00:11:17.604 sys 0m2.011s
00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:17.604 ************************************
00:11:17.604 END TEST nvmf_bdevio
00:11:17.604 ************************************
00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:11:17.604
00:11:17.604 real 3m51.281s
00:11:17.604 user 9m58.032s
00:11:17.604 sys 1m7.161s
00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:17.604 ************************************
00:11:17.604 END TEST nvmf_target_core
00:11:17.604 ************************************
00:11:17.604 14:05:25 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:17.604 14:05:25 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:17.604 14:05:25 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.604 14:05:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:17.604 ************************************ 00:11:17.604 START TEST nvmf_target_extra 00:11:17.604 ************************************ 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:17.604 * Looking for test storage... 00:11:17.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.604 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:17.605 ************************************ 00:11:17.605 START TEST nvmf_example 00:11:17.605 ************************************ 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:17.605 * Looking for test storage... 00:11:17.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.605 14:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:17.605 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.606 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.606 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.606 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:17.606 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:17.606 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:17.606 14:05:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.507 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.507 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:19.507 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:19.507 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:19.507 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:19.507 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:11:19.507 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:19.507 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:19.507 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:19.507 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:19.507 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:19.507 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:19.507 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:19.508 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:19.508 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:19.508 Found net devices under 0000:09:00.0: cvl_0_0 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.508 14:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:19.508 Found net devices under 0000:09:00.1: cvl_0_1 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:19.508 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.766 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.766 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.766 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:19.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:19.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:11:19.766 00:11:19.766 --- 10.0.0.2 ping statistics --- 00:11:19.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.766 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:11:19.766 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:19.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:11:19.766 00:11:19.766 --- 10.0.0.1 ping statistics --- 00:11:19.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.766 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:11:19.766 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.766 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:19.766 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:19.766 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.766 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:19.766 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=165277 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 165277 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 165277 ']' 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:19.767 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.767 EAL: No free 2048 kB hugepages reported on node 1 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:20.025 14:05:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:20.025 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.216 Initializing NVMe Controllers 00:11:32.216 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:32.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:32.216 Initialization complete. Launching workers. 00:11:32.216 ======================================================== 00:11:32.216 Latency(us) 00:11:32.216 Device Information : IOPS MiB/s Average min max 00:11:32.216 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14723.10 57.51 4347.93 731.82 16059.46 00:11:32.216 ======================================================== 00:11:32.216 Total : 14723.10 57.51 4347.93 731.82 16059.46 00:11:32.216 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:32.216 rmmod nvme_tcp 00:11:32.216 rmmod nvme_fabrics 00:11:32.216 rmmod nvme_keyring 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 165277 ']' 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 165277 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 165277 ']' 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 165277 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 165277 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 165277' 00:11:32.216 killing process with pid 165277 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 165277 00:11:32.216 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 165277 00:11:32.216 nvmf threads initialize successfully 00:11:32.216 bdev subsystem init successfully 00:11:32.216 created a nvmf target service 00:11:32.216 create targets's poll groups done 00:11:32.216 all subsystems of target started 00:11:32.216 nvmf target is running 00:11:32.216 all subsystems of target stopped 00:11:32.216 destroy targets's poll groups done 00:11:32.216 destroyed the nvmf target service 00:11:32.217 bdev subsystem finish successfully 00:11:32.217 nvmf threads destroy successfully 00:11:32.217 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:32.217 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:32.217 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:32.217 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:32.217 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:32.217 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.217 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.217 14:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.787 00:11:32.787 real 0m15.378s 00:11:32.787 user 0m41.895s 00:11:32.787 sys 0m3.610s 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.787 ************************************ 00:11:32.787 END TEST nvmf_example 00:11:32.787 ************************************ 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
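The nvmf_example pass above reduces to a short RPC sequence against the target followed by one perf run. A minimal sketch of the same steps, assuming a locally built SPDK tree with an nvmf_tgt already listening on the default /var/tmp/spdk.sock (the harness additionally wraps each command in "ip netns exec cvl_0_0_ns_spdk", omitted here):

# Target-side setup, mirroring the rpc_cmd calls traced above.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # NVMF_TRANSPORT_OPTS exactly as logged
./scripts/rpc.py bdev_malloc_create 64 512                  # 64 MiB RAM bdev, 512 B blocks -> Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator-side I/O with the test's parameters: QD 64, 4 KiB blocks, 30% reads, 10 s.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

For reference, this pass reported roughly 14.7k IOPS (57.51 MiB/s) at a 4.35 ms mean latency from a single core, per the summary table above.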
00:11:32.787 ************************************ 00:11:32.787 START TEST nvmf_filesystem 00:11:32.787 ************************************ 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:32.787 * Looking for test storage... 00:11:32.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # 
CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:32.787 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # 
CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # 
CONFIG_EXAMPLES=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:32.788 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ 
#ifndef SPDK_CONFIG_H 00:11:32.788 #define SPDK_CONFIG_H 00:11:32.788 #define SPDK_CONFIG_APPS 1 00:11:32.788 #define SPDK_CONFIG_ARCH native 00:11:32.788 #undef SPDK_CONFIG_ASAN 00:11:32.788 #undef SPDK_CONFIG_AVAHI 00:11:32.788 #undef SPDK_CONFIG_CET 00:11:32.788 #define SPDK_CONFIG_COVERAGE 1 00:11:32.788 #define SPDK_CONFIG_CROSS_PREFIX 00:11:32.788 #undef SPDK_CONFIG_CRYPTO 00:11:32.788 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:32.788 #undef SPDK_CONFIG_CUSTOMOCF 00:11:32.788 #undef SPDK_CONFIG_DAOS 00:11:32.788 #define SPDK_CONFIG_DAOS_DIR 00:11:32.788 #define SPDK_CONFIG_DEBUG 1 00:11:32.788 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:32.788 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:32.788 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:32.788 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:32.788 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:32.788 #undef SPDK_CONFIG_DPDK_UADK 00:11:32.788 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:32.788 #define SPDK_CONFIG_EXAMPLES 1 00:11:32.788 #undef SPDK_CONFIG_FC 00:11:32.788 #define SPDK_CONFIG_FC_PATH 00:11:32.788 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:32.788 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:32.788 #undef SPDK_CONFIG_FUSE 00:11:32.788 #undef SPDK_CONFIG_FUZZER 00:11:32.788 #define SPDK_CONFIG_FUZZER_LIB 00:11:32.788 #undef SPDK_CONFIG_GOLANG 00:11:32.788 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:32.788 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:32.788 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:32.788 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:32.788 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:32.788 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:32.788 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:32.788 #define SPDK_CONFIG_IDXD 1 00:11:32.788 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:32.788 #undef SPDK_CONFIG_IPSEC_MB 00:11:32.788 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:32.788 #define SPDK_CONFIG_ISAL 1 00:11:32.788 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:32.788 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:32.788 #define SPDK_CONFIG_LIBDIR 00:11:32.788 #undef SPDK_CONFIG_LTO 00:11:32.788 #define SPDK_CONFIG_MAX_LCORES 128 00:11:32.788 #define SPDK_CONFIG_NVME_CUSE 1 00:11:32.788 #undef SPDK_CONFIG_OCF 00:11:32.788 #define SPDK_CONFIG_OCF_PATH 00:11:32.788 #define SPDK_CONFIG_OPENSSL_PATH 00:11:32.788 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:32.788 #define SPDK_CONFIG_PGO_DIR 00:11:32.788 #undef SPDK_CONFIG_PGO_USE 00:11:32.788 #define SPDK_CONFIG_PREFIX /usr/local 00:11:32.788 #undef SPDK_CONFIG_RAID5F 00:11:32.788 #undef SPDK_CONFIG_RBD 00:11:32.788 #define SPDK_CONFIG_RDMA 1 00:11:32.788 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:32.788 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:32.788 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:32.788 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:32.788 #define SPDK_CONFIG_SHARED 1 00:11:32.788 #undef SPDK_CONFIG_SMA 00:11:32.788 #define SPDK_CONFIG_TESTS 1 00:11:32.788 #undef SPDK_CONFIG_TSAN 00:11:32.789 #define SPDK_CONFIG_UBLK 1 00:11:32.789 #define SPDK_CONFIG_UBSAN 1 00:11:32.789 #undef SPDK_CONFIG_UNIT_TESTS 00:11:32.789 #undef SPDK_CONFIG_URING 00:11:32.789 #define SPDK_CONFIG_URING_PATH 00:11:32.789 #undef SPDK_CONFIG_URING_ZNS 00:11:32.789 #undef SPDK_CONFIG_USDT 00:11:32.789 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:32.789 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:32.789 #define SPDK_CONFIG_VFIO_USER 1 00:11:32.789 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:32.789 #define SPDK_CONFIG_VHOST 1 00:11:32.789 
#define SPDK_CONFIG_VIRTIO 1 00:11:32.789 #undef SPDK_CONFIG_VTUNE 00:11:32.789 #define SPDK_CONFIG_VTUNE_DIR 00:11:32.789 #define SPDK_CONFIG_WERROR 1 00:11:32.789 #define SPDK_CONFIG_WPDK_DIR 00:11:32.789 #undef SPDK_CONFIG_XNVME 00:11:32.789 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:32.789 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:32.789 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.789 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.789 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.789 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.789 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.789 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.789 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.789 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:32.789 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.789 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:32.789 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:32.789 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:32.789 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:32.789 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:33.049 14:05:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:33.049 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:33.050 14:05:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:33.050 14:05:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:33.050 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 166946 ]] 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 166946 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.GQqwlA 00:11:33.051 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.GQqwlA/tests/target /tmp/spdk.GQqwlA 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=952066048 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4332363776 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=56892350464 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994721280 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5102370816 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30987440128 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997360640 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 
00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376539136 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398944256 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22405120 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30997053440 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997360640 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=307200 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199468032 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199472128 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:33.052 * Looking for test storage... 
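The trace above is set_test_storage() reading the df -T output into the fss/sizes/avails/uses arrays; the checks that follow pick the first storage candidate whose backing mount can hold the requested ~2 GiB. A minimal bash sketch of that selection, reconstructed from the traced checks (testdir and storage_fallback values are taken from this run; the real autotest_common.sh also grows tmpfs/ramfs mounts, which the overlay root here does not need):

#!/usr/bin/env bash
# Reconstructed sketch of the traced set_test_storage() checks.
requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2214592512, as in the trace
testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
storage_fallback=$(mktemp -udt spdk.XXXXXX)          # /tmp/spdk.GQqwlA in this run
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
declare -A fss sizes avails uses
while read -r source fs size use avail _ mount; do   # df -T columns, 1K blocks
    fss[$mount]=$fs
    sizes[$mount]=$((size * 1024))
    avails[$mount]=$((avail * 1024))
    uses[$mount]=$((use * 1024))
done < <(df -T | grep -v Filesystem)
for target_dir in "${storage_candidates[@]}"; do
    # Resolve which mount backs this candidate directory.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    (( target_space == 0 || target_space < requested_size )) && continue
    # Refuse candidates the test would fill past 95% of the filesystem.
    (( (requested_size + uses[$mount]) * 100 / sizes[$mount] > 95 )) && continue
    break
done
export SPDK_TEST_STORAGE=$target_dir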
00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=56892350464 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=7316963328 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:33.052 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:33.053 14:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.955 
14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:34.955 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.955 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:34.956 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:34.956 Found net devices under 0000:09:00.0: cvl_0_0 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:34.956 Found net devices under 0000:09:00.1: cvl_0_1 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.956 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.214 14:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.214 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.214 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:35.214 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:35.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:11:35.215 00:11:35.215 --- 10.0.0.2 ping statistics --- 00:11:35.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.215 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:35.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:11:35.215 00:11:35.215 --- 10.0.0.1 ping statistics --- 00:11:35.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.215 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.215 ************************************ 00:11:35.215 START TEST nvmf_filesystem_no_in_capsule 00:11:35.215 ************************************ 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=168573 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 168573 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 168573 ']' 00:11:35.215 14:05:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.215 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.215 [2024-07-26 14:05:43.168257] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:11:35.215 [2024-07-26 14:05:43.168349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.215 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.473 [2024-07-26 14:05:43.234540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.473 [2024-07-26 14:05:43.346019] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.473 [2024-07-26 14:05:43.346091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.473 [2024-07-26 14:05:43.346119] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.473 [2024-07-26 14:05:43.346132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.473 [2024-07-26 14:05:43.346142] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
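Once the reactor threads below come up, the script drives the target (pid 168573, started inside the cvl_0_0_ns_spdk namespace created above) over the /var/tmp/spdk.sock JSON-RPC socket. The rpc_cmd calls traced in the following lines correspond to these invocations of the stock scripts/rpc.py helper (a sketch; rpc_cmd wraps the same RPC interface):

# Transport: TCP, 8192-byte I/O unit, in-capsule data size 0 (this subtest's variant).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
# Backing store: 512 MiB malloc bdev with 512-byte blocks.
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
# Subsystem with the bdev as a namespace, listening on the target-side address.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420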
00:11:35.473 [2024-07-26 14:05:43.346223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.473 [2024-07-26 14:05:43.346290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.473 [2024-07-26 14:05:43.346356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.473 [2024-07-26 14:05:43.346359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.473 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.473 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:35.473 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:35.473 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.473 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.732 [2024-07-26 14:05:43.500769] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.732 Malloc1 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.732 14:05:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.732 [2024-07-26 14:05:43.677210] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:35.732 { 00:11:35.732 "name": "Malloc1", 00:11:35.732 "aliases": [ 00:11:35.732 "6eaa7ebd-11af-4eb0-922a-d70a4ed3bf39" 00:11:35.732 ], 00:11:35.732 "product_name": "Malloc disk", 00:11:35.732 "block_size": 512, 00:11:35.732 "num_blocks": 1048576, 00:11:35.732 "uuid": "6eaa7ebd-11af-4eb0-922a-d70a4ed3bf39", 00:11:35.732 "assigned_rate_limits": { 00:11:35.732 "rw_ios_per_sec": 0, 00:11:35.732 "rw_mbytes_per_sec": 0, 00:11:35.732 "r_mbytes_per_sec": 0, 00:11:35.732 "w_mbytes_per_sec": 0 00:11:35.732 }, 00:11:35.732 "claimed": true, 00:11:35.732 "claim_type": "exclusive_write", 00:11:35.732 "zoned": false, 00:11:35.732 "supported_io_types": { 00:11:35.732 "read": 
true, 00:11:35.732 "write": true, 00:11:35.732 "unmap": true, 00:11:35.732 "flush": true, 00:11:35.732 "reset": true, 00:11:35.732 "nvme_admin": false, 00:11:35.732 "nvme_io": false, 00:11:35.732 "nvme_io_md": false, 00:11:35.732 "write_zeroes": true, 00:11:35.732 "zcopy": true, 00:11:35.732 "get_zone_info": false, 00:11:35.732 "zone_management": false, 00:11:35.732 "zone_append": false, 00:11:35.732 "compare": false, 00:11:35.732 "compare_and_write": false, 00:11:35.732 "abort": true, 00:11:35.732 "seek_hole": false, 00:11:35.732 "seek_data": false, 00:11:35.732 "copy": true, 00:11:35.732 "nvme_iov_md": false 00:11:35.732 }, 00:11:35.732 "memory_domains": [ 00:11:35.732 { 00:11:35.732 "dma_device_id": "system", 00:11:35.732 "dma_device_type": 1 00:11:35.732 }, 00:11:35.732 { 00:11:35.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.732 "dma_device_type": 2 00:11:35.732 } 00:11:35.732 ], 00:11:35.732 "driver_specific": {} 00:11:35.732 } 00:11:35.732 ]' 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:35.732 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:35.990 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:35.990 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:35.990 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:35.990 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:35.990 14:05:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.555 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:36.555 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:36.555 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.556 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:36.556 14:05:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:38.454 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:38.711 14:05:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:39.276 14:05:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.210 ************************************ 00:11:40.210 START TEST filesystem_ext4 00:11:40.210 ************************************ 00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
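filesystem_ext4 and its sibling subtests all run nvmf_filesystem_create, which exercises the connected namespace end to end: partition, make the filesystem, mount it, perform a small write, unmount, and confirm the target survived. Condensed from the traced filesystem.sh steps (a sketch; retry handling and the per-fstype mkfs flags are simplified):

# /dev/nvme0n1 is the 512 MiB namespace exposed over NVMe/TCP above.
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe; sleep 1
mkfs.ext4 -F /dev/nvme0n1p1    # the btrfs variant below uses mkfs.btrfs -f
mkdir -p /mnt/device
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"             # nvmf_tgt (168573 here) must still be alive after the I/O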
00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:40.210 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:40.210 mke2fs 1.46.5 (30-Dec-2021) 00:11:40.210 Discarding device blocks: 0/522240 done 00:11:40.210 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:40.210 Filesystem UUID: 4b13a414-cc8c-44f7-b7d8-c4f52b9f6275 00:11:40.210 Superblock backups stored on blocks: 00:11:40.210 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:40.210 00:11:40.210 Allocating group tables: 0/64 done 00:11:40.210 Writing inode tables: 0/64 done 00:11:40.468 Creating journal (8192 blocks): done 00:11:40.468 Writing superblocks and filesystem accounting information: 0/64 done 00:11:40.468 00:11:40.468 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:40.468 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:40.468 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:40.727 
14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 168573 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:40.727 00:11:40.727 real 0m0.516s 00:11:40.727 user 0m0.019s 00:11:40.727 sys 0m0.054s 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:40.727 ************************************ 00:11:40.727 END TEST filesystem_ext4 00:11:40.727 ************************************ 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.727 ************************************ 00:11:40.727 START TEST filesystem_btrfs 00:11:40.727 ************************************ 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:40.727 14:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:40.727 14:05:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:41.293 btrfs-progs v6.6.2 00:11:41.293 See https://btrfs.readthedocs.io for more information. 00:11:41.293 00:11:41.293 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:41.293 NOTE: several default settings have changed in version 5.15, please make sure 00:11:41.293 this does not affect your deployments: 00:11:41.293 - DUP for metadata (-m dup) 00:11:41.293 - enabled no-holes (-O no-holes) 00:11:41.293 - enabled free-space-tree (-R free-space-tree) 00:11:41.293 00:11:41.293 Label: (null) 00:11:41.293 UUID: 972f4b36-af96-42da-8074-41e760ca64bf 00:11:41.293 Node size: 16384 00:11:41.293 Sector size: 4096 00:11:41.293 Filesystem size: 510.00MiB 00:11:41.293 Block group profiles: 00:11:41.293 Data: single 8.00MiB 00:11:41.293 Metadata: DUP 32.00MiB 00:11:41.293 System: DUP 8.00MiB 00:11:41.293 SSD detected: yes 00:11:41.293 Zoned device: no 00:11:41.293 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:41.293 Runtime features: free-space-tree 00:11:41.293 Checksum: crc32c 00:11:41.293 Number of devices: 1 00:11:41.293 Devices: 00:11:41.293 ID SIZE PATH 00:11:41.293 1 510.00MiB /dev/nvme0n1p1 00:11:41.293 00:11:41.293 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:41.293 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 168573 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:41.859 00:11:41.859 real 0m1.058s 00:11:41.859 user 0m0.016s 00:11:41.859 sys 0m0.161s 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:41.859 ************************************ 00:11:41.859 END TEST filesystem_btrfs 00:11:41.859 ************************************ 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.859 ************************************ 00:11:41.859 START TEST filesystem_xfs 00:11:41.859 ************************************ 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:41.859 14:05:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:41.859 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:41.859 = sectsz=512 attr=2, projid32bit=1 00:11:41.859 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:41.859 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:11:41.859 data = bsize=4096 blocks=130560, imaxpct=25 00:11:41.859 = sunit=0 swidth=0 blks 00:11:41.859 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:41.859 log =internal log bsize=4096 blocks=16384, version=2 00:11:41.859 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:41.859 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:43.231 Discarding blocks...Done. 00:11:43.231 14:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:43.231 14:05:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:45.128 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:45.128 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:45.128 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:45.128 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:45.128 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:45.128 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:45.386 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 168573 00:11:45.386 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:45.386 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:45.386 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:45.386 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:45.386 00:11:45.386 real 0m3.459s 00:11:45.386 user 0m0.017s 00:11:45.386 sys 0m0.094s 00:11:45.386 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.386 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:45.386 ************************************ 00:11:45.386 END TEST filesystem_xfs 00:11:45.386 ************************************ 00:11:45.386 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
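The disconnect sequence just logged boils down to three commands; a sketch with the device and subsystem NQN taken verbatim from the trace:

# Teardown as in target/filesystem.sh @91-@94 (sketch).
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # take the device lock, drop test partition 1
sync                                             # let outstanding writes settle first
nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # detach every controller for that subsystem NQN

nvme-cli confirms the detach with the "disconnected 1 controller(s)" line seen above, after which the harness only has to verify the serial has vanished from lsblk.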
00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 168573 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 168573 ']' 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 168573 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 168573 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 168573' 00:11:45.644 killing process with pid 168573 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 168573 00:11:45.644 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 168573 00:11:46.209 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:46.209 00:11:46.209 real 0m10.973s 00:11:46.209 user 0m41.817s 00:11:46.209 sys 0m1.866s 00:11:46.209 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.209 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.209 ************************************ 00:11:46.209 END TEST nvmf_filesystem_no_in_capsule 00:11:46.209 ************************************ 00:11:46.209 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:46.209 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:46.209 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.209 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.209 ************************************ 00:11:46.209 START TEST nvmf_filesystem_in_capsule 00:11:46.209 ************************************ 00:11:46.209 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:46.209 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:46.210 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:46.210 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:46.210 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:46.210 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.210 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=170115 00:11:46.210 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:46.210 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 170115 00:11:46.210 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 170115 ']' 00:11:46.210 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.210 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:46.210 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
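Bringing the target back up for the in-capsule pass follows the same pattern the trace goes on to show: start nvmf_tgt, wait for its RPC socket, then provision the subsystem over RPC. A condensed sketch, using the binary path, socket, and RPC arguments shown in the trace (the cvl_0_0_ns_spdk network namespace and the rpc_cmd wrapper are dropped for brevity; plain scripts/rpc.py against the same socket is assumed to be equivalent):

# Start the target (-m 0xF = 4 cores, -e 0xFFFF = all tracepoint groups)
# and wait for /var/tmp/spdk.sock, a simplified waitforlisten.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break   # RPC socket appears once the app is up
    sleep 0.1
done
kill -0 "$nvmfpid"   # bail out if the target died during startup

# Provision what the trace provisions below: a TCP transport with 4096-byte
# in-capsule data, a 512 MiB malloc bdev, one subsystem, one namespace, one listener.
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096
$rpc bdev_malloc_create 512 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After nvme connect, the harness learns the kernel device name by polling lsblk -l -o NAME,SERIAL for the SPDKISFASTANDAWESOME serial, which is how nvme0n1 turns up in the later steps.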
00:11:46.210 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:46.210 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.210 [2024-07-26 14:05:54.195073] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:11:46.210 [2024-07-26 14:05:54.195149] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.210 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.471 [2024-07-26 14:05:54.257755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.471 [2024-07-26 14:05:54.364718] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.471 [2024-07-26 14:05:54.364764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.471 [2024-07-26 14:05:54.364779] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.471 [2024-07-26 14:05:54.364791] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.471 [2024-07-26 14:05:54.364817] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.471 [2024-07-26 14:05:54.364881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.471 [2024-07-26 14:05:54.364942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.471 [2024-07-26 14:05:54.365010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.471 [2024-07-26 14:05:54.365012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.728 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:46.728 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:46.728 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:46.728 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:46.728 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.728 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.728 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:46.728 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:46.728 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.728 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.728 [2024-07-26 14:05:54.533106] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport 
Init *** 00:11:46.728 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.728 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:46.728 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.728 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.728 Malloc1 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.729 [2024-07-26 14:05:54.716389] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:46.729 14:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:46.729 { 00:11:46.729 "name": "Malloc1", 00:11:46.729 "aliases": [ 00:11:46.729 "1c40054f-1007-4284-8806-8746a8c175dc" 00:11:46.729 ], 00:11:46.729 "product_name": "Malloc disk", 00:11:46.729 "block_size": 512, 00:11:46.729 "num_blocks": 1048576, 00:11:46.729 "uuid": "1c40054f-1007-4284-8806-8746a8c175dc", 00:11:46.729 "assigned_rate_limits": { 00:11:46.729 "rw_ios_per_sec": 0, 00:11:46.729 "rw_mbytes_per_sec": 0, 00:11:46.729 "r_mbytes_per_sec": 0, 00:11:46.729 "w_mbytes_per_sec": 0 00:11:46.729 }, 00:11:46.729 "claimed": true, 00:11:46.729 "claim_type": "exclusive_write", 00:11:46.729 "zoned": false, 00:11:46.729 "supported_io_types": { 00:11:46.729 "read": true, 00:11:46.729 "write": true, 00:11:46.729 "unmap": true, 00:11:46.729 "flush": true, 00:11:46.729 "reset": true, 00:11:46.729 "nvme_admin": false, 00:11:46.729 "nvme_io": false, 00:11:46.729 "nvme_io_md": false, 00:11:46.729 "write_zeroes": true, 00:11:46.729 "zcopy": true, 00:11:46.729 "get_zone_info": false, 00:11:46.729 "zone_management": false, 00:11:46.729 "zone_append": false, 00:11:46.729 "compare": false, 00:11:46.729 "compare_and_write": false, 00:11:46.729 "abort": true, 00:11:46.729 "seek_hole": false, 00:11:46.729 "seek_data": false, 00:11:46.729 "copy": true, 00:11:46.729 "nvme_iov_md": false 00:11:46.729 }, 00:11:46.729 "memory_domains": [ 00:11:46.729 { 00:11:46.729 "dma_device_id": "system", 00:11:46.729 "dma_device_type": 1 00:11:46.729 }, 00:11:46.729 { 00:11:46.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.729 "dma_device_type": 2 00:11:46.729 } 00:11:46.729 ], 00:11:46.729 "driver_specific": {} 00:11:46.729 } 00:11:46.729 ]' 00:11:46.729 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:46.986 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:46.986 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:46.986 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:46.986 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:46.986 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:46.986 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:46.986 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.553 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.553 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:47.553 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.553 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:47.553 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:50.080 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:50.080 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:50.080 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.080 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:50.080 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.080 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:50.080 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:50.080 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:50.080 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:50.080 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:50.080 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:50.081 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:50.081 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:50.081 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:50.081 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:50.081 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:50.081 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:50.081 14:05:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:50.081 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:51.453 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:51.453 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:51.453 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:51.453 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.453 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.453 ************************************ 00:11:51.453 START TEST filesystem_in_capsule_ext4 00:11:51.453 ************************************ 00:11:51.453 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:51.453 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:51.453 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:51.453 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:51.454 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:51.454 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:51.454 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:51.454 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:51.454 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:51.454 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:51.454 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:51.454 mke2fs 1.46.5 (30-Dec-2021) 00:11:51.454 Discarding device blocks: 0/522240 done 00:11:51.454 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:51.454 Filesystem UUID: 52dffb4d-2397-4e05-9b3e-f693635d17d3 00:11:51.454 Superblock backups stored on blocks: 00:11:51.454 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:51.454 00:11:51.454 Allocating group tables: 0/64 done 00:11:51.454 Writing inode tables: 
0/64 done 00:11:51.454 Creating journal (8192 blocks): done 00:11:51.454 Writing superblocks and filesystem accounting information: 0/64 done 00:11:51.454 00:11:51.454 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:51.454 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:51.454 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 170115 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:51.712 00:11:51.712 real 0m0.427s 00:11:51.712 user 0m0.013s 00:11:51.712 sys 0m0.053s 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:51.712 ************************************ 00:11:51.712 END TEST filesystem_in_capsule_ext4 00:11:51.712 ************************************ 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.712 
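Every make_filesystem call in this run differs only in which force flag it passes. The branch visible in the xtrace (autotest_common.sh @926-@937) reduces to something like the following sketch; the retry counter (local i=0) that the helper also declares is omitted here since it never triggers in this run:

make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F    # mke2fs takes -F to overwrite an existing filesystem
    else
        force=-f    # mkfs.btrfs and mkfs.xfs take -f for the same thing
    fi
    "mkfs.$fstype" "$force" "$dev_name"
}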
************************************ 00:11:51.712 START TEST filesystem_in_capsule_btrfs 00:11:51.712 ************************************ 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:51.712 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:51.970 btrfs-progs v6.6.2 00:11:51.970 See https://btrfs.readthedocs.io for more information. 00:11:51.970 00:11:51.970 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:51.970 NOTE: several default settings have changed in version 5.15, please make sure 00:11:51.970 this does not affect your deployments: 00:11:51.970 - DUP for metadata (-m dup) 00:11:51.970 - enabled no-holes (-O no-holes) 00:11:51.970 - enabled free-space-tree (-R free-space-tree) 00:11:51.970 00:11:51.970 Label: (null) 00:11:51.970 UUID: ecff4d7d-4ff8-4d14-8521-66c63bc2b000 00:11:51.970 Node size: 16384 00:11:51.970 Sector size: 4096 00:11:51.970 Filesystem size: 510.00MiB 00:11:51.970 Block group profiles: 00:11:51.970 Data: single 8.00MiB 00:11:51.970 Metadata: DUP 32.00MiB 00:11:51.970 System: DUP 8.00MiB 00:11:51.970 SSD detected: yes 00:11:51.970 Zoned device: no 00:11:51.970 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:51.970 Runtime features: free-space-tree 00:11:51.970 Checksum: crc32c 00:11:51.970 Number of devices: 1 00:11:51.970 Devices: 00:11:51.970 ID SIZE PATH 00:11:51.970 1 510.00MiB /dev/nvme0n1p1 00:11:51.970 00:11:51.970 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:51.970 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:51.970 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:51.970 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:51.970 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:51.970 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:51.970 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:51.970 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:51.970 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 170115 00:11:51.970 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:51.970 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:51.970 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:51.970 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:51.970 00:11:51.970 real 0m0.401s 00:11:51.970 user 0m0.024s 00:11:51.970 sys 0m0.104s 00:11:51.970 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.970 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- common/autotest_common.sh@10 -- # set +x 00:11:51.970 ************************************ 00:11:51.970 END TEST filesystem_in_capsule_btrfs 00:11:51.970 ************************************ 00:11:52.228 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:52.228 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:52.228 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:52.228 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.228 ************************************ 00:11:52.228 START TEST filesystem_in_capsule_xfs 00:11:52.228 ************************************ 00:11:52.228 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:52.228 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:52.228 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:52.228 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:52.228 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:52.228 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:52.228 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:52.228 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:52.228 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:52.228 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:52.228 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:52.229 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:52.229 = sectsz=512 attr=2, projid32bit=1 00:11:52.229 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:52.229 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:52.229 data = bsize=4096 blocks=130560, imaxpct=25 00:11:52.229 = sunit=0 swidth=0 blks 00:11:52.229 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:52.229 log =internal log bsize=4096 blocks=16384, version=2 00:11:52.229 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:52.229 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:53.161 Discarding blocks...Done. 
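The geometry above is internally consistent with the test setup: 130560 data blocks x 4096 bytes/block = 534,773,760 bytes = 510 MiB, i.e. the SPDK_TEST partition (0% to 100% of the 512 MiB malloc-backed namespace, minus partition-table and alignment overhead) that mkfs.btrfs also reported as 510.00MiB. A one-liner to cross-check, assuming the same device name as the trace:

blockdev --getsize64 /dev/nvme0n1p1   # expect 534773760 bytes (130560 * 4096)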
00:11:53.161 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:53.161 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:55.057 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:55.315 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:55.315 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:55.315 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:55.315 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:55.315 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:55.315 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 170115 00:11:55.315 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:55.315 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:55.315 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:55.315 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:55.315 00:11:55.315 real 0m3.133s 00:11:55.315 user 0m0.015s 00:11:55.315 sys 0m0.059s 00:11:55.315 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:55.315 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:55.315 ************************************ 00:11:55.315 END TEST filesystem_in_capsule_xfs 00:11:55.315 ************************************ 00:11:55.315 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 170115 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 170115 ']' 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 170115 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 170115 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 170115' 00:11:55.572 killing process with pid 170115 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 170115 00:11:55.572 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 170115 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:56.137 00:11:56.137 real 0m9.865s 00:11:56.137 user 0m37.532s 00:11:56.137 sys 0m1.641s 00:11:56.137 14:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.137 ************************************ 00:11:56.137 END TEST nvmf_filesystem_in_capsule 00:11:56.137 ************************************ 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:56.137 rmmod nvme_tcp 00:11:56.137 rmmod nvme_fabrics 00:11:56.137 rmmod nvme_keyring 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.137 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:58.675 00:11:58.675 real 0m25.399s 00:11:58.675 user 1m20.288s 00:11:58.675 sys 0m5.137s 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.675 ************************************ 00:11:58.675 END TEST nvmf_filesystem 00:11:58.675 ************************************ 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:58.675 14:06:06 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:58.675 ************************************ 00:11:58.675 START TEST nvmf_target_discovery 00:11:58.675 ************************************ 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:58.675 * Looking for test storage... 00:11:58.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.675 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.675 14:06:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:58.676 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.577 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:00.577 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.578 14:06:08 
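The PCI scan here classifies NICs purely by vendor:device ID: Intel 0x1592/0x159b land in the e810 list, 0x37d2 in x722, and the 0x15b3 entries in mlx; because this run selects e810, pci_devs is reduced to the e810 matches. A rough standalone equivalent of that match, assuming lspci is available (the grep pattern is illustrative, not part of the test script):

  # list E810-family ports the same way gather_supported_nvmf_pci_devs selects them
  lspci -nn | grep -Ei '8086:(1592|159b)'

This run finds the two 0x159b ports at 0000:09:00.0 and 0000:09:00.1, both bound to the ice driver.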
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:00.578 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:00.578 Found net devices under 0000:09:00.0: cvl_0_0 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:00.578 Found net devices under 0000:09:00.1: cvl_0_1 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
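At this point nvmf_tcp_init has built the test topology: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A minimal standalone sketch of the same layout, with eth0/eth1 as hypothetical stand-ins for the two ports:

  # target port lives in a private namespace; initiator port stays in the root ns
  ip netns add tgt_ns
  ip link set eth0 netns tgt_ns
  ip addr add 10.0.0.1/24 dev eth1
  ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev eth0
  ip link set eth1 up
  ip netns exec tgt_ns ip link set eth0 up
  ip netns exec tgt_ns ip link set lo up

The iptables ACCEPT rule that follows opens 4420/tcp on the initiator-side interface, and the two pings confirm the link works in both directions before nvmf_tgt is started inside the namespace.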
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:00.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:12:00.578 00:12:00.578 --- 10.0.0.2 ping statistics --- 00:12:00.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.578 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:12:00.578 00:12:00.578 --- 10.0.0.1 ping statistics --- 00:12:00.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.578 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=173812 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 173812 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 173812 ']' 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:00.578 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.578 [2024-07-26 14:06:08.462470] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:12:00.578 [2024-07-26 14:06:08.462564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.578 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.578 [2024-07-26 14:06:08.532496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.836 [2024-07-26 14:06:08.646635] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.836 [2024-07-26 14:06:08.646681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.836 [2024-07-26 14:06:08.646711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.836 [2024-07-26 14:06:08.646724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.836 [2024-07-26 14:06:08.646735] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.836 [2024-07-26 14:06:08.646826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.836 [2024-07-26 14:06:08.646877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.836 [2024-07-26 14:06:08.646957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.836 [2024-07-26 14:06:08.646960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.836 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:00.836 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:00.836 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:00.836 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:00.836 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.836 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.836 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:00.836 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.836 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.836 [2024-07-26 14:06:08.798743] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.836 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.836 14:06:08 
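With the TCP transport created, the test provisions four identical targets in the loop that follows. Condensed into a standalone sketch — assuming rpc.py here stands for SPDK's scripts/rpc.py aimed at the target started above (the log's rpc_cmd wrapper issues the same calls):

  for i in 1 2 3 4; do
      rpc.py bdev_null_create "Null$i" 102400 512                      # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
      rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
          -a -s "SPDK0000000000000$i"                                  # allow any host, fixed serial
      rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
  done

Each subsystem ends up exposing exactly one namespace on the same 10.0.0.2:4420 listener, which the discovery log and nvmf_get_subsystems output further down confirm.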
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.837 Null1 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.837 [2024-07-26 14:06:08.843096] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.837 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.095 Null2 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.095 Null3 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.095 Null4 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.095 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:12:01.095 00:12:01.096 
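The nvme discover just issued queries the discovery service at 10.0.0.2:4420 with the host NQN generated earlier. Six records are expected in the reply: the current discovery subsystem, the four cnode subsystems, and the referral registered on port 4430. A quick shell-level assertion on that count — a sketch, not something this test script runs:

  n=$(nvme discover -t tcp -a 10.0.0.2 -s 4420 | grep -c '=====Discovery Log Entry')
  [ "$n" -eq 6 ] || echo "expected 6 discovery log entries, got $n"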
Discovery Log Number of Records 6, Generation counter 6 00:12:01.096 =====Discovery Log Entry 0====== 00:12:01.096 trtype: tcp 00:12:01.096 adrfam: ipv4 00:12:01.096 subtype: current discovery subsystem 00:12:01.096 treq: not required 00:12:01.096 portid: 0 00:12:01.096 trsvcid: 4420 00:12:01.096 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:01.096 traddr: 10.0.0.2 00:12:01.096 eflags: explicit discovery connections, duplicate discovery information 00:12:01.096 sectype: none 00:12:01.096 =====Discovery Log Entry 1====== 00:12:01.096 trtype: tcp 00:12:01.096 adrfam: ipv4 00:12:01.096 subtype: nvme subsystem 00:12:01.096 treq: not required 00:12:01.096 portid: 0 00:12:01.096 trsvcid: 4420 00:12:01.096 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:01.096 traddr: 10.0.0.2 00:12:01.096 eflags: none 00:12:01.096 sectype: none 00:12:01.096 =====Discovery Log Entry 2====== 00:12:01.096 trtype: tcp 00:12:01.096 adrfam: ipv4 00:12:01.096 subtype: nvme subsystem 00:12:01.096 treq: not required 00:12:01.096 portid: 0 00:12:01.096 trsvcid: 4420 00:12:01.096 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:01.096 traddr: 10.0.0.2 00:12:01.096 eflags: none 00:12:01.096 sectype: none 00:12:01.096 =====Discovery Log Entry 3====== 00:12:01.096 trtype: tcp 00:12:01.096 adrfam: ipv4 00:12:01.096 subtype: nvme subsystem 00:12:01.096 treq: not required 00:12:01.096 portid: 0 00:12:01.096 trsvcid: 4420 00:12:01.096 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:01.096 traddr: 10.0.0.2 00:12:01.096 eflags: none 00:12:01.096 sectype: none 00:12:01.096 =====Discovery Log Entry 4====== 00:12:01.096 trtype: tcp 00:12:01.096 adrfam: ipv4 00:12:01.096 subtype: nvme subsystem 00:12:01.096 treq: not required 00:12:01.096 portid: 0 00:12:01.096 trsvcid: 4420 00:12:01.096 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:01.096 traddr: 10.0.0.2 00:12:01.096 eflags: none 00:12:01.096 sectype: none 00:12:01.096 =====Discovery Log Entry 5====== 00:12:01.096 trtype: tcp 00:12:01.096 adrfam: ipv4 00:12:01.096 subtype: discovery subsystem referral 00:12:01.096 treq: not required 00:12:01.096 portid: 0 00:12:01.096 trsvcid: 4430 00:12:01.096 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:01.096 traddr: 10.0.0.2 00:12:01.096 eflags: none 00:12:01.096 sectype: none 00:12:01.096 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:01.096 Perform nvmf subsystem discovery via RPC 00:12:01.096 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:01.096 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.096 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.096 [ 00:12:01.096 { 00:12:01.096 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:01.096 "subtype": "Discovery", 00:12:01.096 "listen_addresses": [ 00:12:01.096 { 00:12:01.096 "trtype": "TCP", 00:12:01.096 "adrfam": "IPv4", 00:12:01.096 "traddr": "10.0.0.2", 00:12:01.096 "trsvcid": "4420" 00:12:01.096 } 00:12:01.096 ], 00:12:01.096 "allow_any_host": true, 00:12:01.096 "hosts": [] 00:12:01.096 }, 00:12:01.096 { 00:12:01.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:01.096 "subtype": "NVMe", 00:12:01.096 "listen_addresses": [ 00:12:01.096 { 00:12:01.096 "trtype": "TCP", 00:12:01.096 "adrfam": "IPv4", 00:12:01.096 "traddr": "10.0.0.2", 00:12:01.096 "trsvcid": "4420" 00:12:01.096 } 00:12:01.096 ], 00:12:01.096 
"allow_any_host": true, 00:12:01.096 "hosts": [], 00:12:01.096 "serial_number": "SPDK00000000000001", 00:12:01.096 "model_number": "SPDK bdev Controller", 00:12:01.096 "max_namespaces": 32, 00:12:01.096 "min_cntlid": 1, 00:12:01.096 "max_cntlid": 65519, 00:12:01.096 "namespaces": [ 00:12:01.096 { 00:12:01.096 "nsid": 1, 00:12:01.096 "bdev_name": "Null1", 00:12:01.096 "name": "Null1", 00:12:01.096 "nguid": "8DB7F80B5C68458894E5970029F8746C", 00:12:01.096 "uuid": "8db7f80b-5c68-4588-94e5-970029f8746c" 00:12:01.096 } 00:12:01.096 ] 00:12:01.096 }, 00:12:01.096 { 00:12:01.096 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:01.096 "subtype": "NVMe", 00:12:01.096 "listen_addresses": [ 00:12:01.096 { 00:12:01.096 "trtype": "TCP", 00:12:01.096 "adrfam": "IPv4", 00:12:01.096 "traddr": "10.0.0.2", 00:12:01.096 "trsvcid": "4420" 00:12:01.096 } 00:12:01.096 ], 00:12:01.096 "allow_any_host": true, 00:12:01.096 "hosts": [], 00:12:01.096 "serial_number": "SPDK00000000000002", 00:12:01.096 "model_number": "SPDK bdev Controller", 00:12:01.096 "max_namespaces": 32, 00:12:01.096 "min_cntlid": 1, 00:12:01.096 "max_cntlid": 65519, 00:12:01.096 "namespaces": [ 00:12:01.096 { 00:12:01.096 "nsid": 1, 00:12:01.096 "bdev_name": "Null2", 00:12:01.096 "name": "Null2", 00:12:01.096 "nguid": "C19EA26A4711430D9B1CFD72933B2348", 00:12:01.096 "uuid": "c19ea26a-4711-430d-9b1c-fd72933b2348" 00:12:01.096 } 00:12:01.096 ] 00:12:01.096 }, 00:12:01.096 { 00:12:01.096 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:01.096 "subtype": "NVMe", 00:12:01.096 "listen_addresses": [ 00:12:01.096 { 00:12:01.096 "trtype": "TCP", 00:12:01.096 "adrfam": "IPv4", 00:12:01.096 "traddr": "10.0.0.2", 00:12:01.096 "trsvcid": "4420" 00:12:01.096 } 00:12:01.096 ], 00:12:01.096 "allow_any_host": true, 00:12:01.096 "hosts": [], 00:12:01.096 "serial_number": "SPDK00000000000003", 00:12:01.096 "model_number": "SPDK bdev Controller", 00:12:01.096 "max_namespaces": 32, 00:12:01.096 "min_cntlid": 1, 00:12:01.096 "max_cntlid": 65519, 00:12:01.096 "namespaces": [ 00:12:01.096 { 00:12:01.096 "nsid": 1, 00:12:01.096 "bdev_name": "Null3", 00:12:01.096 "name": "Null3", 00:12:01.096 "nguid": "84FC0DB7B9EC4BE4A1D48E5E311A8336", 00:12:01.096 "uuid": "84fc0db7-b9ec-4be4-a1d4-8e5e311a8336" 00:12:01.096 } 00:12:01.096 ] 00:12:01.096 }, 00:12:01.096 { 00:12:01.096 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:01.096 "subtype": "NVMe", 00:12:01.096 "listen_addresses": [ 00:12:01.096 { 00:12:01.096 "trtype": "TCP", 00:12:01.096 "adrfam": "IPv4", 00:12:01.096 "traddr": "10.0.0.2", 00:12:01.096 "trsvcid": "4420" 00:12:01.096 } 00:12:01.096 ], 00:12:01.096 "allow_any_host": true, 00:12:01.096 "hosts": [], 00:12:01.096 "serial_number": "SPDK00000000000004", 00:12:01.096 "model_number": "SPDK bdev Controller", 00:12:01.096 "max_namespaces": 32, 00:12:01.096 "min_cntlid": 1, 00:12:01.096 "max_cntlid": 65519, 00:12:01.096 "namespaces": [ 00:12:01.096 { 00:12:01.096 "nsid": 1, 00:12:01.096 "bdev_name": "Null4", 00:12:01.096 "name": "Null4", 00:12:01.096 "nguid": "97DCC6A16FB74CE8AD9418A23BEBC022", 00:12:01.096 "uuid": "97dcc6a1-6fb7-4ce8-ad94-18a23bebc022" 00:12:01.096 } 00:12:01.096 ] 00:12:01.096 } 00:12:01.096 ] 00:12:01.096 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.096 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:01.096 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:01.096 14:06:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.096 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.096 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 
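The nvmf_get_subsystems dump above mirrors the discovery log: four NVMe subsystems, each carrying one null-backed namespace. The deletes running through here unwind the setup pairwise — each subsystem first, then its backing null bdev — before the 4430 referral is dropped. In sketch form, under the same rpc.py assumption as before:

  for i in 1 2 3 4; do
      rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # detaches the namespace
      rpc.py bdev_null_delete "Null$i"                             # then free the bdev
  done
  rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430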
00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.354 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:01.355 rmmod nvme_tcp 00:12:01.355 rmmod nvme_fabrics 00:12:01.355 rmmod nvme_keyring 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- 
# return 0 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 173812 ']' 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 173812 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 173812 ']' 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 173812 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 173812 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 173812' 00:12:01.355 killing process with pid 173812 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 173812 00:12:01.355 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 173812 00:12:01.615 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:01.615 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:01.615 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:01.615 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.615 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:01.615 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.615 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.615 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:04.157 00:12:04.157 real 0m5.422s 00:12:04.157 user 0m4.285s 00:12:04.157 sys 0m1.842s 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.157 ************************************ 00:12:04.157 END TEST nvmf_target_discovery 00:12:04.157 ************************************ 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:04.157 ************************************ 00:12:04.157 START TEST nvmf_referrals 00:12:04.157 ************************************ 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:04.157 * Looking for test storage... 00:12:04.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.157 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:04.158 14:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:04.158 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.059 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.059 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:06.059 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:06.059 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 
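
The gather_supported_nvmf_pci_devs pass that follows walks the known Intel/Mellanox device IDs and resolves each matching port to its kernel netdev through sysfs. A rough standalone equivalent of the E810 branch it takes below (the real helper builds a pci_bus_cache first; this loop and its echo format are only an illustration):

    # find E810 ports (vendor 0x8086, device 0x159b) and the netdevs bound to them
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done
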
00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:06.060 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:06.060 
14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:06.060 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:06.060 Found net devices under 0000:09:00.0: cvl_0_0 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:06.060 Found net devices under 0000:09:00.1: cvl_0_1 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.060 14:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:06.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:06.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:12:06.060 00:12:06.060 --- 10.0.0.2 ping statistics --- 00:12:06.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.060 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:06.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:06.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:12:06.060 00:12:06.060 --- 10.0.0.1 ping statistics --- 00:12:06.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.060 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.060 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:06.061 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:06.061 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.061 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:06.061 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:06.061 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:06.061 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:06.061 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:06.061 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.061 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=176027 00:12:06.061 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:06.061 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 176027 00:12:06.061 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 176027 ']' 00:12:06.061 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.061 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:06.061 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
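
By this point nvmftestinit has built the test topology: one E810 port is moved into a private network namespace to act as the target, the sibling port stays in the default namespace as the initiator, and a ping in each direction proves the path before nvmf_tgt is started inside the namespace. Condensed from the trace above (interface names are the cvl_0_0/cvl_0_1 the ice driver reported):

    ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

The target is then launched as ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF, so every rpc_cmd in the trace below drives a target that only sees the namespaced port.
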
00:12:06.061 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:06.061 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.061 [2024-07-26 14:06:14.064362] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:12:06.061 [2024-07-26 14:06:14.064455] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.319 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.319 [2024-07-26 14:06:14.128333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.319 [2024-07-26 14:06:14.241688] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.319 [2024-07-26 14:06:14.241736] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.319 [2024-07-26 14:06:14.241763] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.319 [2024-07-26 14:06:14.241775] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.319 [2024-07-26 14:06:14.241785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.319 [2024-07-26 14:06:14.241848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.319 [2024-07-26 14:06:14.241908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.319 [2024-07-26 14:06:14.241985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.319 [2024-07-26 14:06:14.241989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.577 [2024-07-26 14:06:14.377693] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.577 14:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.577 [2024-07-26 14:06:14.389964] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:12:06.577 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:06.578 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:06.578 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:06.578 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:06.578 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.578 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:06.578 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.835 14:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.835 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:07.093 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:07.093 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:07.093 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:07.093 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:07.093 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:07.093 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:07.093 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.093 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.351 14:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:07.351 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:07.608 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:07.608 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:07.608 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:07.609 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:07.609 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:07.609 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.609 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
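
The referrals test just traced splits into two rounds. First it registers three plain referrals (127.0.0.2/3/4 on port 4430), checks that nvmf_discovery_get_referrals and an initiator-side discovery log page report the same addresses, and removes them again. Then it re-adds 127.0.0.2 with an explicit -n subsystem NQN, once as a referral to another discovery subsystem and once to the concrete subsystem nqn.2016-06.io.spdk:cnode1, and uses jq subtype filters to tell the two record types apart. A minimal sketch of the same flow, assuming rpc_cmd resolves to spdk/scripts/rpc.py as in this harness and omitting the --hostnqn/--hostid flags the test passes to nvme discover:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    disco() { nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json; }   # illustrative helper

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                     # round 1: plain referrals
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    $rpc nvmf_discovery_get_referrals | jq length                   # expect 3
    disco | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done

    # round 2: typed referrals, distinguished by subsystem NQN
    $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    disco | jq '.records[] | select(.subtype == "nvme subsystem")'              # names cnode1
    disco | jq '.records[] | select(.subtype == "discovery subsystem referral")'
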
00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:07.866 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:08.124 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:08.124 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:08.124 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:08.124 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:08.124 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:08.125 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
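
Everything after the final remove_referral is teardown: nvmftestfini unloads the kernel NVMe modules, kills the target, and returns the interfaces and namespace to their original state, which is what the rmmod/killprocess/flush lines below show. In outline (pid and interface names taken from this run; the netns step is an assumption about what _remove_spdk_ns amounts to here):

    modprobe -v -r nvme-tcp           # also drops nvme_fabrics/nvme_keyring, as logged below
    modprobe -v -r nvme-fabrics
    kill 176027                       # the nvmf_tgt pid this run was waiting on
    ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
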
00:12:08.125 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:08.125 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:12:08.125 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:08.125 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:08.125 rmmod nvme_tcp 00:12:08.125 rmmod nvme_fabrics 00:12:08.125 rmmod nvme_keyring 00:12:08.125 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:08.125 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:08.125 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:08.125 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 176027 ']' 00:12:08.125 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 176027 00:12:08.125 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 176027 ']' 00:12:08.125 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 176027 00:12:08.125 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:08.125 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:08.125 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 176027 00:12:08.125 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:08.125 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:08.125 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 176027' 00:12:08.125 killing process with pid 176027 00:12:08.125 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 176027 00:12:08.125 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 176027 00:12:08.384 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:08.384 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:08.384 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:08.384 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:08.384 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:08.384 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.384 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.384 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:10.922 00:12:10.922 real 0m6.703s 00:12:10.922 user 0m9.605s 00:12:10.922 sys 0m2.144s 00:12:10.922 14:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.922 ************************************ 00:12:10.922 END TEST nvmf_referrals 00:12:10.922 ************************************ 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:10.922 ************************************ 00:12:10.922 START TEST nvmf_connect_disconnect 00:12:10.922 ************************************ 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:10.922 * Looking for test storage... 00:12:10.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.922 14:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:10.922 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:10.923 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:12.828 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:12.828 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.828 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:12.829 14:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:12.829 Found net devices under 0000:09:00.0: cvl_0_0 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:12.829 Found net devices under 0000:09:00.1: cvl_0_1 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:12.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:12:12.829 00:12:12.829 --- 10.0.0.2 ping statistics --- 00:12:12.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.829 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:12.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:12:12.829 00:12:12.829 --- 10.0.0.1 ping statistics --- 00:12:12.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.829 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=178320 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 178320 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 178320 ']' 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:12.829 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.087 [2024-07-26 14:06:20.845553] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:12:13.087 [2024-07-26 14:06:20.845653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.087 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.087 [2024-07-26 14:06:20.905970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.087 [2024-07-26 14:06:21.014220] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.087 [2024-07-26 14:06:21.014263] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.087 [2024-07-26 14:06:21.014293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.087 [2024-07-26 14:06:21.014305] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.087 [2024-07-26 14:06:21.014314] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:13.087 [2024-07-26 14:06:21.014444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.087 [2024-07-26 14:06:21.014496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.087 [2024-07-26 14:06:21.014553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.087 [2024-07-26 14:06:21.014556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.345 [2024-07-26 14:06:21.163902] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.345 14:06:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.345 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.345 [2024-07-26 14:06:21.216474] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.346 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.346 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:13.346 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:13.346 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:16.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:27.478 14:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:27.478 rmmod nvme_tcp 00:12:27.478 rmmod nvme_fabrics 00:12:27.478 rmmod nvme_keyring 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 178320 ']' 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 178320 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 178320 ']' 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 178320 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 178320 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 178320' 00:12:27.478 killing process with pid 178320 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 178320 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 178320 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.478 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:30.015 00:12:30.015 real 0m19.043s 00:12:30.015 user 0m56.969s 00:12:30.015 sys 0m3.384s 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:30.015 14:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.015 ************************************ 00:12:30.015 END TEST nvmf_connect_disconnect 00:12:30.015 ************************************ 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.015 ************************************ 00:12:30.015 START TEST nvmf_multitarget 00:12:30.015 ************************************ 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:30.015 * Looking for test storage... 00:12:30.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.015 14:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:30.015 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:30.016 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:31.919 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.919 14:06:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.919 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:31.920 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:31.920 Found net devices under 0000:09:00.0: cvl_0_0 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:31.920 Found net devices under 0000:09:00.1: cvl_0_1 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:31.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:31.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:12:31.920 00:12:31.920 --- 10.0.0.2 ping statistics --- 00:12:31.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.920 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:12:31.920 00:12:31.920 --- 10.0.0.1 ping statistics --- 00:12:31.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.920 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=182068 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 182068 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 182068 ']' 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
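The multitarget run now repeats the same bring-up: nvmf_tgt is launched inside the namespace and the harness polls until the application listens on /var/tmp/spdk.sock. For reference, the provisioning that the preceding connect_disconnect test drove through rpc_cmd (the harness helper that forwards to scripts/rpc.py) reduces to roughly the sketch below; the loop at the end is an approximation of what produced the five "disconnected 1 controller(s)" lines, assuming nvme-cli on the initiator side:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  ./scripts/rpc.py bdev_malloc_create 64 512      # MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE; returns Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 1 5); do                         # num_iterations=5 in the trace
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"
  done

Every RPC name and argument above is taken verbatim from the trace; only the nvme-cli loop is inferred from the test's output.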
00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.920 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:31.920 [2024-07-26 14:06:39.925457] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:12:31.920 [2024-07-26 14:06:39.925563] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.178 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.178 [2024-07-26 14:06:39.991491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.178 [2024-07-26 14:06:40.103346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.178 [2024-07-26 14:06:40.103417] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.178 [2024-07-26 14:06:40.103439] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.178 [2024-07-26 14:06:40.103450] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.178 [2024-07-26 14:06:40.103460] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.178 [2024-07-26 14:06:40.103551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.179 [2024-07-26 14:06:40.103608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.179 [2024-07-26 14:06:40.103629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.179 [2024-07-26 14:06:40.103633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.436 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:32.436 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:32.436 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:32.436 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:32.436 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:32.436 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.436 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:32.436 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:32.436 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:32.436 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:32.436 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:32.693 "nvmf_tgt_1" 00:12:32.693 14:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:32.693 "nvmf_tgt_2" 00:12:32.693 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:32.693 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:32.950 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:32.950 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:32.950 true 00:12:32.950 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:32.950 true 00:12:33.208 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:33.208 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:33.208 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:33.208 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:33.208 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:33.208 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:33.208 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:33.208 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:33.208 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:33.208 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:33.208 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:33.208 rmmod nvme_tcp 00:12:33.208 rmmod nvme_fabrics 00:12:33.208 rmmod nvme_keyring 00:12:33.208 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:33.208 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:33.209 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:33.209 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 182068 ']' 00:12:33.209 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 182068 00:12:33.209 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 182068 ']' 00:12:33.209 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 182068 00:12:33.209 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:33.209 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
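The block above is the substance of the multitarget test: multitarget_rpc.py extends the standard RPC client with nvmf_create_target, nvmf_delete_target, and nvmf_get_targets, and jq length counts the JSON array of live targets. Stripped of the xtrace noise, the sequence it runs is:

  rpc=./test/nvmf/target/multitarget_rpc.py
  $rpc nvmf_get_targets | jq length           # 1: only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc nvmf_get_targets | jq length           # 3: default plus the two just created
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  $rpc nvmf_get_targets | jq length           # back to 1

The quoted "nvmf_tgt_1"/"nvmf_tgt_2" lines and the bare true lines in the trace are evidently the script printing each RPC's JSON result: the created target's name, and the deletions' success.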
00:12:33.209 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 182068 00:12:33.209 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:33.209 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:33.209 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 182068' 00:12:33.209 killing process with pid 182068 00:12:33.209 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 182068 00:12:33.209 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 182068 00:12:33.467 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:33.467 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:33.467 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:33.467 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.467 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:33.467 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.467 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.467 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.021 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:36.021 00:12:36.021 real 0m5.992s 00:12:36.021 user 0m6.819s 00:12:36.021 sys 0m2.001s 00:12:36.021 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:36.021 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:36.021 ************************************ 00:12:36.021 END TEST nvmf_multitarget 00:12:36.021 ************************************ 00:12:36.021 14:06:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:36.021 14:06:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:36.021 14:06:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:36.021 14:06:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:36.021 ************************************ 00:12:36.021 START TEST nvmf_rpc 00:12:36.021 ************************************ 00:12:36.021 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:36.021 * Looking for test storage... 
00:12:36.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.021 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:36.022 14:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:36.022 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.929 14:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:37.929 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:37.929 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.929 
14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:37.929 Found net devices under 0000:09:00.0: cvl_0_0 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:37.929 Found net devices under 0000:09:00.1: cvl_0_1 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:37.929 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.930 14:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:37.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:12:37.930 00:12:37.930 --- 10.0.0.2 ping statistics --- 00:12:37.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.930 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:37.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:12:37.930 00:12:37.930 --- 10.0.0.1 ping statistics --- 00:12:37.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.930 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=184164 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.930 14:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 184164 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 184164 ']' 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:37.930 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.930 [2024-07-26 14:06:45.908908] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:12:37.930 [2024-07-26 14:06:45.908987] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.930 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.189 [2024-07-26 14:06:45.973441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.189 [2024-07-26 14:06:46.084392] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.189 [2024-07-26 14:06:46.084444] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.189 [2024-07-26 14:06:46.084457] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.189 [2024-07-26 14:06:46.084469] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.189 [2024-07-26 14:06:46.084478] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
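(Orientation note: the fixture that nvmf_tcp_init assembled a few lines up isolates one port of the e810 pair in a network namespace, so initiator and target exchange real TCP traffic on a single box. The sequence below reproduces the ip/iptables commands from the log; only the comments are added.)

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # both directions are ping-verified, then nvmf_tgt itself runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF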
00:12:38.189 [2024-07-26 14:06:46.084616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.189 [2024-07-26 14:06:46.084643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.189 [2024-07-26 14:06:46.084694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.189 [2024-07-26 14:06:46.084696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.447 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:38.448 "tick_rate": 2700000000, 00:12:38.448 "poll_groups": [ 00:12:38.448 { 00:12:38.448 "name": "nvmf_tgt_poll_group_000", 00:12:38.448 "admin_qpairs": 0, 00:12:38.448 "io_qpairs": 0, 00:12:38.448 "current_admin_qpairs": 0, 00:12:38.448 "current_io_qpairs": 0, 00:12:38.448 "pending_bdev_io": 0, 00:12:38.448 "completed_nvme_io": 0, 00:12:38.448 "transports": [] 00:12:38.448 }, 00:12:38.448 { 00:12:38.448 "name": "nvmf_tgt_poll_group_001", 00:12:38.448 "admin_qpairs": 0, 00:12:38.448 "io_qpairs": 0, 00:12:38.448 "current_admin_qpairs": 0, 00:12:38.448 "current_io_qpairs": 0, 00:12:38.448 "pending_bdev_io": 0, 00:12:38.448 "completed_nvme_io": 0, 00:12:38.448 "transports": [] 00:12:38.448 }, 00:12:38.448 { 00:12:38.448 "name": "nvmf_tgt_poll_group_002", 00:12:38.448 "admin_qpairs": 0, 00:12:38.448 "io_qpairs": 0, 00:12:38.448 "current_admin_qpairs": 0, 00:12:38.448 "current_io_qpairs": 0, 00:12:38.448 "pending_bdev_io": 0, 00:12:38.448 "completed_nvme_io": 0, 00:12:38.448 "transports": [] 00:12:38.448 }, 00:12:38.448 { 00:12:38.448 "name": "nvmf_tgt_poll_group_003", 00:12:38.448 "admin_qpairs": 0, 00:12:38.448 "io_qpairs": 0, 00:12:38.448 "current_admin_qpairs": 0, 00:12:38.448 "current_io_qpairs": 0, 00:12:38.448 "pending_bdev_io": 0, 00:12:38.448 "completed_nvme_io": 0, 00:12:38.448 "transports": [] 00:12:38.448 } 00:12:38.448 ] 00:12:38.448 }' 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
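(The check pattern executed here recurs throughout rpc.sh: pull nvmf_get_stats once, then count or sum fields with jq/awk in the shell. A minimal sketch of the jcount/jsum idiom used above and below; rpc_cmd is the suite's wrapper around SPDK's RPC client.)

    stats=$(rpc_cmd nvmf_get_stats)
    # jcount '.poll_groups[].name' -- one poll group per core in the 0xF mask
    test "$(jq '.poll_groups[].name' <<< "$stats" | wc -l)" -eq 4
    # jsum '.poll_groups[].admin_qpairs' -- zero qpairs before any host connects
    test "$(jq '.poll_groups[].admin_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}')" -eq 0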
00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.448 [2024-07-26 14:06:46.346540] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:38.448 "tick_rate": 2700000000, 00:12:38.448 "poll_groups": [ 00:12:38.448 { 00:12:38.448 "name": "nvmf_tgt_poll_group_000", 00:12:38.448 "admin_qpairs": 0, 00:12:38.448 "io_qpairs": 0, 00:12:38.448 "current_admin_qpairs": 0, 00:12:38.448 "current_io_qpairs": 0, 00:12:38.448 "pending_bdev_io": 0, 00:12:38.448 "completed_nvme_io": 0, 00:12:38.448 "transports": [ 00:12:38.448 { 00:12:38.448 "trtype": "TCP" 00:12:38.448 } 00:12:38.448 ] 00:12:38.448 }, 00:12:38.448 { 00:12:38.448 "name": "nvmf_tgt_poll_group_001", 00:12:38.448 "admin_qpairs": 0, 00:12:38.448 "io_qpairs": 0, 00:12:38.448 "current_admin_qpairs": 0, 00:12:38.448 "current_io_qpairs": 0, 00:12:38.448 "pending_bdev_io": 0, 00:12:38.448 "completed_nvme_io": 0, 00:12:38.448 "transports": [ 00:12:38.448 { 00:12:38.448 "trtype": "TCP" 00:12:38.448 } 00:12:38.448 ] 00:12:38.448 }, 00:12:38.448 { 00:12:38.448 "name": "nvmf_tgt_poll_group_002", 00:12:38.448 "admin_qpairs": 0, 00:12:38.448 "io_qpairs": 0, 00:12:38.448 "current_admin_qpairs": 0, 00:12:38.448 "current_io_qpairs": 0, 00:12:38.448 "pending_bdev_io": 0, 00:12:38.448 "completed_nvme_io": 0, 00:12:38.448 "transports": [ 00:12:38.448 { 00:12:38.448 "trtype": "TCP" 00:12:38.448 } 00:12:38.448 ] 00:12:38.448 }, 00:12:38.448 { 00:12:38.448 "name": "nvmf_tgt_poll_group_003", 00:12:38.448 "admin_qpairs": 0, 00:12:38.448 "io_qpairs": 0, 00:12:38.448 "current_admin_qpairs": 0, 00:12:38.448 "current_io_qpairs": 0, 00:12:38.448 "pending_bdev_io": 0, 00:12:38.448 "completed_nvme_io": 0, 00:12:38.448 "transports": [ 00:12:38.448 { 00:12:38.448 "trtype": "TCP" 00:12:38.448 } 00:12:38.448 ] 00:12:38.448 } 00:12:38.448 ] 00:12:38.448 }' 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:38.448 14:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.448 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.707 Malloc1 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.707 [2024-07-26 14:06:46.508269] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:12:38.707 [2024-07-26 14:06:46.530703] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:12:38.707 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:38.707 could not add new controller: failed to write to nvme-fabrics device 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.707 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.272 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.272 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:39.273 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.273 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:39.273 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:41.172 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:41.172 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:41.172 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.172 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:41.172 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.172 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:41.172 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:41.435 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.436 [2024-07-26 14:06:49.259724] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:12:41.436 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:41.436 could not add new controller: failed to write to nvme-fabrics device 00:12:41.436 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:41.436 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:41.436 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:41.436 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:41.436 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:41.436 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.436 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.436 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.436 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.002 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.002 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:42.002 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.002 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:42.002 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
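(While waitforserial sleeps before probing lsblk, it is worth summarizing the access-control round trip the log just walked through: a fabrics connect is refused until the host NQN is admitted, either per-host or via allow_any_host. Commands are as in the log; $HOSTNQN and $HOSTID stand in for the nqn.2014-08.org.nvmexpress:uuid:29f67375-... values generated earlier.)

    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420     # rejected: "does not allow host"
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 $HOSTNQN     # whitelist one host ...
    rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1     # ... or open to all
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420     # accepted
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1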
00:12:43.900 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:43.900 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:43.900 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.900 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:43.900 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.900 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:43.900 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.170 [2024-07-26 14:06:51.978678] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.170 
14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.170 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.736 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.736 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:44.736 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.736 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:44.736 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:46.637 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:46.637 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:46.637 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.637 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:46.637 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.637 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:46.637 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.895 [2024-07-26 14:06:54.746456] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.895 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.896 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.896 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.896 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.896 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.896 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.896 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.896 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.896 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.462 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.462 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:12:47.462 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.462 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:47.462 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:49.361 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:49.361 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:49.361 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.361 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:49.361 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.361 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:49.361 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.619 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.619 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:49.619 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:49.619 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.619 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:49.619 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.619 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:49.619 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.619 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.619 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.619 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.619 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.620 [2024-07-26 14:06:57.511100] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.620 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.553 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.553 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:50.553 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.553 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:50.553 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.453 14:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.453 [2024-07-26 14:07:00.334096] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.453 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.019 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.019 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.019 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.019 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:53.019 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:55.546 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:55.546 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:55.546 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.546 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:55.546 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.546 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:55.546 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.546 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.546 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:55.546 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:55.546 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.546 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:55.546 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.546 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:55.546 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.546 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.546 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.546 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.546 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.546 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.546 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.546 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.547 [2024-07-26 14:07:03.105057] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.547 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.805 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.805 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:55.805 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.805 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:55.805 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:57.704 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:57.704 14:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:57.704 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 [2024-07-26 14:07:05.822472] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 [2024-07-26 14:07:05.870583] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.964 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.964 [2024-07-26 14:07:05.918753] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.965 [2024-07-26 14:07:05.966950] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.965 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.223 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.223 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.223 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.223 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.223 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.223 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.223 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.223 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.223 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.224 [2024-07-26 14:07:06.015077] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.224 14:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:12:58.224 "tick_rate": 2700000000,
00:12:58.224 "poll_groups": [
00:12:58.224 {
00:12:58.224 "name": "nvmf_tgt_poll_group_000",
00:12:58.224 "admin_qpairs": 2,
00:12:58.224 "io_qpairs": 84,
00:12:58.224 "current_admin_qpairs": 0,
00:12:58.224 "current_io_qpairs": 0,
00:12:58.224 "pending_bdev_io": 0,
00:12:58.224 "completed_nvme_io": 144,
00:12:58.224 "transports": [
00:12:58.224 {
00:12:58.224 "trtype": "TCP"
00:12:58.224 }
00:12:58.224 ]
00:12:58.224 },
00:12:58.224 {
00:12:58.224 "name": "nvmf_tgt_poll_group_001",
00:12:58.224 "admin_qpairs": 2,
00:12:58.224 "io_qpairs": 84,
00:12:58.224 "current_admin_qpairs": 0,
00:12:58.224 "current_io_qpairs": 0,
00:12:58.224 "pending_bdev_io": 0,
00:12:58.224 "completed_nvme_io": 171,
00:12:58.224 "transports": [
00:12:58.224 {
00:12:58.224 "trtype": "TCP"
00:12:58.224 }
00:12:58.224 ]
00:12:58.224 },
00:12:58.224 {
00:12:58.224 "name": "nvmf_tgt_poll_group_002",
00:12:58.224 "admin_qpairs": 1,
00:12:58.224 "io_qpairs": 84,
00:12:58.224 "current_admin_qpairs": 0,
00:12:58.224 "current_io_qpairs": 0,
00:12:58.224 "pending_bdev_io": 0,
00:12:58.224 "completed_nvme_io": 98,
00:12:58.224 "transports": [
00:12:58.224 {
00:12:58.224 "trtype": "TCP"
00:12:58.224 }
00:12:58.224 ]
00:12:58.224 },
00:12:58.224 {
00:12:58.224 "name": "nvmf_tgt_poll_group_003",
00:12:58.224 "admin_qpairs": 2,
00:12:58.224 "io_qpairs": 84,
00:12:58.224 "current_admin_qpairs": 0,
00:12:58.224 "current_io_qpairs": 0,
00:12:58.224 "pending_bdev_io": 0,
00:12:58.224 "completed_nvme_io": 273,
00:12:58.224 "transports": [
00:12:58.224 {
00:12:58.224 "trtype": "TCP"
00:12:58.224 }
00:12:58.224 ]
00:12:58.224 }
00:12:58.224 ]
00:12:58.224 }'
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
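
The final check sums fields across the four poll groups with rpc.sh's jsum helper. From the @19/@20 records it is essentially a jq-into-awk reduction; a minimal sketch, reconstructed from the trace and assuming the nvmf_get_stats JSON is fed in via the $stats variable captured above:

    # sum one numeric field across all poll groups (reconstruction of target/rpc.sh's jsum)
    jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7   -> the (( 7 > 0 )) assertion
    jsum '.poll_groups[].io_qpairs'      # 4 x 84  = 336 -> the (( 336 > 0 )) assertion
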
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 ))
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:58.224 rmmod nvme_tcp
00:12:58.224 rmmod nvme_fabrics
00:12:58.224 rmmod nvme_keyring
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 184164 ']'
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 184164
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 184164 ']'
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 184164
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:58.224 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 184164
00:12:58.483 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:58.483 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:58.483 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 184164'
00:12:58.483 killing process with pid 184164
00:12:58.483 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 184164
00:12:58.483 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 184164
00:12:58.742 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:58.742 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:58.742 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:58.742 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:58.742 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:58.742 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:58.742 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:58.742 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:00.649 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:00.650
00:13:00.650 real 0m25.052s
00:13:00.650 user 1m20.886s
00:13:00.650 sys 0m4.214s
00:13:00.650 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:00.650 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:00.650 ************************************
00:13:00.650 END TEST nvmf_rpc
00:13:00.650 ************************************
00:13:00.650 14:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:13:00.650 14:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:13:00.650 14:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:00.650 14:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:00.650 ************************************
00:13:00.650 START TEST nvmf_invalid
00:13:00.650 ************************************
00:13:00.650 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:13:00.908 * Looking for test storage...
00:13:00.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s
00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:13:00.908 14:07:08
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.908 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.909 14:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.909 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:02.817 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:02.817 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:02.817 Found net devices under 0000:09:00.0: cvl_0_0 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.817 14:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:02.817 Found net devices under 0000:09:00.1: cvl_0_1 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.817 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:02.818 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.818 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.818 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:02.818 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:02.818 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.818 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.076 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.076 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.076 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:03.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:13:03.077 00:13:03.077 --- 10.0.0.2 ping statistics --- 00:13:03.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.077 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:03.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:13:03.077 00:13:03.077 --- 10.0.0.1 ping statistics --- 00:13:03.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.077 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=188650 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 188650 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 188650 ']' 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.077 14:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:03.077 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.077 [2024-07-26 14:07:10.990360] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:13:03.077 [2024-07-26 14:07:10.990431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.077 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.077 [2024-07-26 14:07:11.057966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.335 [2024-07-26 14:07:11.174185] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.335 [2024-07-26 14:07:11.174241] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.335 [2024-07-26 14:07:11.174254] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.335 [2024-07-26 14:07:11.174265] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.335 [2024-07-26 14:07:11.174275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
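[Editor's note] The "Found 0000:09:00.x (0x8086 - 0x159b)" lines above come from nvmf/common.sh walking the PCI bus for supported NICs: it matches vendor/device IDs against known Intel E810 / X722 / Mellanox parts, then lists the net interfaces that sysfs exposes under each matching function. A minimal standalone sketch of that sysfs walk follows; the vendor/device IDs are taken from the log, but the loop itself is a simplified reconstruction, not the in-tree nvmf/common.sh code.

    #!/usr/bin/env bash
    # Sketch: find net interfaces backed by Intel E810 (0x8086:0x159b) PCI
    # functions, mirroring the "Found net devices under <bdf>: <iface>" lines
    # in the log above. Simplified reconstruction, not the real script.
    set -euo pipefail

    intel=0x8086
    e810_dev=0x159b   # device ID seen in the log ("0x8086 - 0x159b")

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")
        device=$(cat "$pci/device")
        [[ $vendor == "$intel" && $device == "$e810_dev" ]] || continue
        bdf=${pci##*/}
        echo "Found $bdf ($vendor - $device)"
        # Each net interface bound to this function appears under <pci>/net/
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue
            echo "Found net devices under $bdf: ${net##*/}"
        done
    done

[Editor's note] The nvmf_tcp_init sequence traced above (common.sh@229-268) then builds a two-port loopback topology: the target-side port cvl_0_0 is moved into a fresh network namespace with 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP/4420 is opened in iptables, and reachability is verified with one ping in each direction before nvmf_tgt is launched inside the namespace. A condensed sketch of that setup, with the interface names and addressing taken from the log (treat it as illustrative, not the verbatim script):

    #!/usr/bin/env bash
    # Sketch of the namespace topology built by nvmf_tcp_init above.
    set -euo pipefail

    TGT_IF=cvl_0_0          # target-side port, moved into its own namespace
    INI_IF=cvl_0_1          # initiator-side port, stays in the root namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP (port 4420) in, then verify reachability both ways.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # The target is started inside the namespace so its listener binds there:
    # ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &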
00:13:03.335 [2024-07-26 14:07:11.174329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.335 [2024-07-26 14:07:11.174398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.335 [2024-07-26 14:07:11.174518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.335 [2024-07-26 14:07:11.174522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.335 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.335 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:03.335 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.335 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:03.335 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.335 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.335 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:03.335 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22848 00:13:03.900 [2024-07-26 14:07:11.610713] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:03.900 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:03.900 { 00:13:03.900 "nqn": "nqn.2016-06.io.spdk:cnode22848", 00:13:03.900 "tgt_name": "foobar", 00:13:03.900 "method": "nvmf_create_subsystem", 00:13:03.900 "req_id": 1 00:13:03.900 } 00:13:03.900 Got JSON-RPC error response 00:13:03.900 response: 00:13:03.900 { 00:13:03.900 "code": -32603, 00:13:03.900 "message": "Unable to find target foobar" 00:13:03.900 }' 00:13:03.900 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:03.900 { 00:13:03.900 "nqn": "nqn.2016-06.io.spdk:cnode22848", 00:13:03.900 "tgt_name": "foobar", 00:13:03.900 "method": "nvmf_create_subsystem", 00:13:03.900 "req_id": 1 00:13:03.900 } 00:13:03.900 Got JSON-RPC error response 00:13:03.900 response: 00:13:03.900 { 00:13:03.900 "code": -32603, 00:13:03.900 "message": "Unable to find target foobar" 00:13:03.900 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:03.900 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:03.900 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2889 00:13:03.900 [2024-07-26 14:07:11.907752] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2889: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:04.158 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:04.158 { 00:13:04.158 "nqn": "nqn.2016-06.io.spdk:cnode2889", 00:13:04.158 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:04.158 "method": "nvmf_create_subsystem", 00:13:04.158 "req_id": 1 00:13:04.158 } 00:13:04.158 Got JSON-RPC error 
response 00:13:04.158 response: 00:13:04.158 { 00:13:04.158 "code": -32602, 00:13:04.158 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:04.158 }' 00:13:04.158 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:04.158 { 00:13:04.158 "nqn": "nqn.2016-06.io.spdk:cnode2889", 00:13:04.158 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:04.158 "method": "nvmf_create_subsystem", 00:13:04.158 "req_id": 1 00:13:04.158 } 00:13:04.158 Got JSON-RPC error response 00:13:04.158 response: 00:13:04.158 { 00:13:04.158 "code": -32602, 00:13:04.158 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:04.158 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:04.158 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:04.158 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17910 00:13:04.158 [2024-07-26 14:07:12.172606] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17910: invalid model number 'SPDK_Controller' 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:04.416 { 00:13:04.416 "nqn": "nqn.2016-06.io.spdk:cnode17910", 00:13:04.416 "model_number": "SPDK_Controller\u001f", 00:13:04.416 "method": "nvmf_create_subsystem", 00:13:04.416 "req_id": 1 00:13:04.416 } 00:13:04.416 Got JSON-RPC error response 00:13:04.416 response: 00:13:04.416 { 00:13:04.416 "code": -32602, 00:13:04.416 "message": "Invalid MN SPDK_Controller\u001f" 00:13:04.416 }' 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:04.416 { 00:13:04.416 "nqn": "nqn.2016-06.io.spdk:cnode17910", 00:13:04.416 "model_number": "SPDK_Controller\u001f", 00:13:04.416 "method": "nvmf_create_subsystem", 00:13:04.416 "req_id": 1 00:13:04.416 } 00:13:04.416 Got JSON-RPC error response 00:13:04.416 response: 00:13:04.416 { 00:13:04.416 "code": -32602, 00:13:04.416 "message": "Invalid MN SPDK_Controller\u001f" 00:13:04.416 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 94 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.416 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x22' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
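[Editor's note] The long run of printf %x / echo -e / string+= entries above and below is gen_random_s from target/invalid.sh building a random serial or model number one byte at a time: it draws random codes from the chars array (decimal 32..127, i.e. printable ASCII plus DEL) and appends the corresponding character until the requested length is reached. A condensed reconstruction of that helper follows; the in-tree version is traced here verbatim, so take this only as a readable summary of the same technique:

    # Sketch of gen_random_s: pick <length> random codes from 32..127 and
    # append each as a character, as the xtrace above shows step by step.
    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))   # printable ASCII plus DEL, as in the trace
        for (( ll = 0; ll < length; ll++ )); do
            local code=${chars[RANDOM % ${#chars[@]}]}
            # render the code as hex, expand it with echo -e, append
            string+=$(echo -en "\\x$(printf %x "$code")")
        done
        echo "$string"
    }
    # e.g. gen_random_s 21 produced '^*$D!Q2?=c~"l]F'\'' !t{u' in the run above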
00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]] 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '^*$D!Q2?=c~"l]F'\'' !t{u' 00:13:04.417 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '^*$D!Q2?=c~"l]F'\'' !t{u' nqn.2016-06.io.spdk:cnode23081 00:13:04.677 [2024-07-26 14:07:12.533850] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23081: invalid serial number '^*$D!Q2?=c~"l]F' !t{u' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:04.677 { 00:13:04.677 "nqn": "nqn.2016-06.io.spdk:cnode23081", 00:13:04.677 "serial_number": "^*$D!Q2?=c~\"l]F'\'' !t{u", 00:13:04.677 "method": "nvmf_create_subsystem", 00:13:04.677 "req_id": 1 00:13:04.677 } 00:13:04.677 Got JSON-RPC error response 00:13:04.677 response: 00:13:04.677 { 00:13:04.677 "code": -32602, 00:13:04.677 "message": "Invalid SN ^*$D!Q2?=c~\"l]F'\'' !t{u" 00:13:04.677 }' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:04.677 { 00:13:04.677 "nqn": "nqn.2016-06.io.spdk:cnode23081", 00:13:04.677 "serial_number": "^*$D!Q2?=c~\"l]F' !t{u", 00:13:04.677 "method": "nvmf_create_subsystem", 00:13:04.677 "req_id": 1 00:13:04.677 } 00:13:04.677 Got JSON-RPC error response 00:13:04.677 response: 00:13:04.677 { 00:13:04.677 "code": -32602, 00:13:04.677 "message": "Invalid SN ^*$D!Q2?=c~\"l]F' !t{u" 00:13:04.677 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 
-- # gen_random_s 41 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=1 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x54' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:04.677 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 110 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 
00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 
00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:04.678 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:04.679 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:04.679 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.679 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.679 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:04.679 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:04.679 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:04.679 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.679 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.679 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ s == \- ]] 00:13:04.679 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 's4Nk1F,LGIaTV!-'\''4inIZ::t.ZSO+KV:8vKro6vxt' 00:13:04.679 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 's4Nk1F,LGIaTV!-'\''4inIZ::t.ZSO+KV:8vKro6vxt' nqn.2016-06.io.spdk:cnode17794 00:13:04.936 [2024-07-26 14:07:12.891035] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17794: invalid model number 's4Nk1F,LGIaTV!-'4inIZ::t.ZSO+KV:8vKro6vxt' 00:13:04.936 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:04.936 { 00:13:04.936 "nqn": "nqn.2016-06.io.spdk:cnode17794", 00:13:04.936 "model_number": "s4Nk1F,LGIaTV!-'\''4inIZ::t.ZSO+KV:8vKro6vxt", 00:13:04.936 "method": "nvmf_create_subsystem", 00:13:04.936 "req_id": 1 00:13:04.936 } 00:13:04.936 Got JSON-RPC error response 00:13:04.936 response: 00:13:04.936 { 00:13:04.936 "code": -32602, 00:13:04.936 "message": "Invalid MN s4Nk1F,LGIaTV!-'\''4inIZ::t.ZSO+KV:8vKro6vxt" 00:13:04.936 }' 00:13:04.936 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:04.936 { 00:13:04.936 "nqn": "nqn.2016-06.io.spdk:cnode17794", 00:13:04.936 "model_number": "s4Nk1F,LGIaTV!-'4inIZ::t.ZSO+KV:8vKro6vxt", 00:13:04.936 "method": "nvmf_create_subsystem", 00:13:04.936 "req_id": 1 00:13:04.936 } 00:13:04.936 Got JSON-RPC error response 00:13:04.936 response: 00:13:04.936 { 00:13:04.936 "code": -32602, 00:13:04.936 "message": "Invalid MN s4Nk1F,LGIaTV!-'4inIZ::t.ZSO+KV:8vKro6vxt" 00:13:04.936 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:04.936 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:05.194 [2024-07-26 14:07:13.135939] tcp.c: 677:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:13:05.194 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:05.451 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:05.451 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:05.451 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:05.451 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:05.451 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:05.708 [2024-07-26 14:07:13.649639] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:05.708 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:05.708 { 00:13:05.708 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:05.708 "listen_address": { 00:13:05.708 "trtype": "tcp", 00:13:05.708 "traddr": "", 00:13:05.708 "trsvcid": "4421" 00:13:05.708 }, 00:13:05.708 "method": "nvmf_subsystem_remove_listener", 00:13:05.708 "req_id": 1 00:13:05.708 } 00:13:05.708 Got JSON-RPC error response 00:13:05.708 response: 00:13:05.708 { 00:13:05.708 "code": -32602, 00:13:05.708 "message": "Invalid parameters" 00:13:05.708 }' 00:13:05.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:05.709 { 00:13:05.709 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:05.709 "listen_address": { 00:13:05.709 "trtype": "tcp", 00:13:05.709 "traddr": "", 00:13:05.709 "trsvcid": "4421" 00:13:05.709 }, 00:13:05.709 "method": "nvmf_subsystem_remove_listener", 00:13:05.709 "req_id": 1 00:13:05.709 } 00:13:05.709 Got JSON-RPC error response 00:13:05.709 response: 00:13:05.709 { 00:13:05.709 "code": -32602, 00:13:05.709 "message": "Invalid parameters" 00:13:05.709 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:05.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24205 -i 0 00:13:05.966 [2024-07-26 14:07:13.898384] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24205: invalid cntlid range [0-65519] 00:13:05.966 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:05.966 { 00:13:05.966 "nqn": "nqn.2016-06.io.spdk:cnode24205", 00:13:05.966 "min_cntlid": 0, 00:13:05.966 "method": "nvmf_create_subsystem", 00:13:05.966 "req_id": 1 00:13:05.966 } 00:13:05.966 Got JSON-RPC error response 00:13:05.966 response: 00:13:05.966 { 00:13:05.966 "code": -32602, 00:13:05.966 "message": "Invalid cntlid range [0-65519]" 00:13:05.966 }' 00:13:05.966 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:05.966 { 00:13:05.966 "nqn": "nqn.2016-06.io.spdk:cnode24205", 00:13:05.966 "min_cntlid": 0, 00:13:05.966 "method": "nvmf_create_subsystem", 00:13:05.966 "req_id": 1 00:13:05.966 } 00:13:05.966 Got JSON-RPC error response 00:13:05.966 response: 00:13:05.966 { 00:13:05.966 "code": -32602, 00:13:05.966 "message": "Invalid cntlid range [0-65519]" 00:13:05.966 } == *\I\n\v\a\l\i\d\ 
\c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:05.966 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4564 -i 65520 00:13:06.224 [2024-07-26 14:07:14.143196] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4564: invalid cntlid range [65520-65519] 00:13:06.224 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:06.224 { 00:13:06.224 "nqn": "nqn.2016-06.io.spdk:cnode4564", 00:13:06.224 "min_cntlid": 65520, 00:13:06.224 "method": "nvmf_create_subsystem", 00:13:06.224 "req_id": 1 00:13:06.224 } 00:13:06.224 Got JSON-RPC error response 00:13:06.224 response: 00:13:06.224 { 00:13:06.224 "code": -32602, 00:13:06.224 "message": "Invalid cntlid range [65520-65519]" 00:13:06.224 }' 00:13:06.224 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:06.224 { 00:13:06.224 "nqn": "nqn.2016-06.io.spdk:cnode4564", 00:13:06.224 "min_cntlid": 65520, 00:13:06.224 "method": "nvmf_create_subsystem", 00:13:06.224 "req_id": 1 00:13:06.224 } 00:13:06.224 Got JSON-RPC error response 00:13:06.224 response: 00:13:06.224 { 00:13:06.224 "code": -32602, 00:13:06.224 "message": "Invalid cntlid range [65520-65519]" 00:13:06.224 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.224 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13075 -I 0 00:13:06.481 [2024-07-26 14:07:14.400052] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13075: invalid cntlid range [1-0] 00:13:06.481 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:06.481 { 00:13:06.481 "nqn": "nqn.2016-06.io.spdk:cnode13075", 00:13:06.481 "max_cntlid": 0, 00:13:06.481 "method": "nvmf_create_subsystem", 00:13:06.481 "req_id": 1 00:13:06.481 } 00:13:06.481 Got JSON-RPC error response 00:13:06.481 response: 00:13:06.481 { 00:13:06.481 "code": -32602, 00:13:06.481 "message": "Invalid cntlid range [1-0]" 00:13:06.481 }' 00:13:06.481 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:06.481 { 00:13:06.481 "nqn": "nqn.2016-06.io.spdk:cnode13075", 00:13:06.481 "max_cntlid": 0, 00:13:06.481 "method": "nvmf_create_subsystem", 00:13:06.481 "req_id": 1 00:13:06.481 } 00:13:06.481 Got JSON-RPC error response 00:13:06.481 response: 00:13:06.481 { 00:13:06.481 "code": -32602, 00:13:06.481 "message": "Invalid cntlid range [1-0]" 00:13:06.481 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.481 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6884 -I 65520 00:13:06.738 [2024-07-26 14:07:14.656907] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6884: invalid cntlid range [1-65520] 00:13:06.738 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:06.738 { 00:13:06.738 "nqn": "nqn.2016-06.io.spdk:cnode6884", 00:13:06.738 "max_cntlid": 65520, 00:13:06.738 "method": "nvmf_create_subsystem", 00:13:06.738 "req_id": 1 00:13:06.738 } 00:13:06.738 Got JSON-RPC error response 00:13:06.738 response: 00:13:06.738 { 00:13:06.738 
"code": -32602, 00:13:06.738 "message": "Invalid cntlid range [1-65520]" 00:13:06.738 }' 00:13:06.738 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:06.738 { 00:13:06.738 "nqn": "nqn.2016-06.io.spdk:cnode6884", 00:13:06.738 "max_cntlid": 65520, 00:13:06.738 "method": "nvmf_create_subsystem", 00:13:06.738 "req_id": 1 00:13:06.738 } 00:13:06.738 Got JSON-RPC error response 00:13:06.738 response: 00:13:06.738 { 00:13:06.738 "code": -32602, 00:13:06.738 "message": "Invalid cntlid range [1-65520]" 00:13:06.738 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.738 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3185 -i 6 -I 5 00:13:06.994 [2024-07-26 14:07:14.905749] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3185: invalid cntlid range [6-5] 00:13:06.994 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:06.994 { 00:13:06.994 "nqn": "nqn.2016-06.io.spdk:cnode3185", 00:13:06.994 "min_cntlid": 6, 00:13:06.994 "max_cntlid": 5, 00:13:06.994 "method": "nvmf_create_subsystem", 00:13:06.994 "req_id": 1 00:13:06.994 } 00:13:06.994 Got JSON-RPC error response 00:13:06.994 response: 00:13:06.994 { 00:13:06.994 "code": -32602, 00:13:06.994 "message": "Invalid cntlid range [6-5]" 00:13:06.994 }' 00:13:06.994 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:06.994 { 00:13:06.994 "nqn": "nqn.2016-06.io.spdk:cnode3185", 00:13:06.994 "min_cntlid": 6, 00:13:06.994 "max_cntlid": 5, 00:13:06.994 "method": "nvmf_create_subsystem", 00:13:06.994 "req_id": 1 00:13:06.994 } 00:13:06.994 Got JSON-RPC error response 00:13:06.994 response: 00:13:06.994 { 00:13:06.994 "code": -32602, 00:13:06.994 "message": "Invalid cntlid range [6-5]" 00:13:06.994 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.994 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:07.251 { 00:13:07.251 "name": "foobar", 00:13:07.251 "method": "nvmf_delete_target", 00:13:07.251 "req_id": 1 00:13:07.251 } 00:13:07.251 Got JSON-RPC error response 00:13:07.251 response: 00:13:07.251 { 00:13:07.251 "code": -32602, 00:13:07.251 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:07.251 }' 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:07.251 { 00:13:07.251 "name": "foobar", 00:13:07.251 "method": "nvmf_delete_target", 00:13:07.251 "req_id": 1 00:13:07.251 } 00:13:07.251 Got JSON-RPC error response 00:13:07.251 response: 00:13:07.251 { 00:13:07.251 "code": -32602, 00:13:07.251 "message": "The specified target doesn't exist, cannot delete it." 
00:13:07.251 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:07.251 rmmod nvme_tcp 00:13:07.251 rmmod nvme_fabrics 00:13:07.251 rmmod nvme_keyring 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 188650 ']' 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 188650 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 188650 ']' 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 188650 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 188650 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 188650' 00:13:07.251 killing process with pid 188650 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 188650 00:13:07.251 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 188650 00:13:07.509 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:07.509 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:07.509 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:07.509 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:07.509 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:07.509 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.509 14:07:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.509 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.416 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:09.416 00:13:09.416 real 0m8.770s 00:13:09.416 user 0m20.439s 00:13:09.416 sys 0m2.466s 00:13:09.416 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.416 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.416 ************************************ 00:13:09.416 END TEST nvmf_invalid 00:13:09.416 ************************************ 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:09.676 ************************************ 00:13:09.676 START TEST nvmf_connect_stress 00:13:09.676 ************************************ 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:09.676 * Looking for test storage... 00:13:09.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:09.676 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.216 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.216 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:12.216 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:13:12.216 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:12.216 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:12.216 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:12.217 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:12.217 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:12.217 Found net devices under 0000:09:00.0: cvl_0_0 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.217 14:07:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:12.217 Found net devices under 0000:09:00.1: cvl_0_1 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:12.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:13:12.217 00:13:12.217 --- 10.0.0.2 ping statistics --- 00:13:12.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.217 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:12.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:13:12.217 00:13:12.217 --- 10.0.0.1 ping statistics --- 00:13:12.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.217 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.217 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:12.218 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:12.218 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:12.218 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:12.218 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:12.218 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.218 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=191283 00:13:12.218 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:12.218 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 191283 00:13:12.218 14:07:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 191283 ']' 00:13:12.218 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.218 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:12.218 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.218 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:12.218 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.218 [2024-07-26 14:07:19.890810] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:13:12.218 [2024-07-26 14:07:19.890905] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.218 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.218 [2024-07-26 14:07:19.957677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:12.218 [2024-07-26 14:07:20.074261] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.218 [2024-07-26 14:07:20.074330] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.218 [2024-07-26 14:07:20.074344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.218 [2024-07-26 14:07:20.074355] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.218 [2024-07-26 14:07:20.074365] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
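The trace above is the stock phy-mode test bring-up: nvmftestinit moves one port of the e810 pair (cvl_0_0) into a private network namespace to act as the target, leaves its peer (cvl_0_1) in the root namespace as the initiator, addresses the two ends as 10.0.0.2 and 10.0.0.1, opens TCP port 4420, and ping-verifies both directions before nvmfappstart launches nvmf_tgt inside the namespace with core mask 0xE. A condensed sketch of that sequence, using the interface names, addresses, and flags shown in the log (run as root; the repo path is abbreviated and the pid handling is illustrative):

    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

    # Start the target inside the namespace; the harness then waits for the
    # app to answer on its RPC socket (/var/tmp/spdk.sock) via waitforlisten.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!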
00:13:12.218 [2024-07-26 14:07:20.074415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.218 [2024-07-26 14:07:20.074537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.218 [2024-07-26 14:07:20.074537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.218 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:12.218 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:12.218 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:12.218 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:12.218 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.218 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.218 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:12.218 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.218 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.218 [2024-07-26 14:07:20.222991] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.476 [2024-07-26 14:07:20.258632] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.476 NULL1 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=191320 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
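Between the app start above and the stress loop being traced here, connect_stress.sh provisions the target end to end over JSON-RPC: a TCP transport with an 8192-byte I/O unit size, subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, serial SPDK00000000000001, up to 10 namespaces), a listener on 10.0.0.2:4420, and a 1000 MB null bdev with 512-byte blocks. Issued by hand, the same four calls would look roughly like this sketch (rpc_cmd in the harness wraps scripts/rpc.py; the heredoc body that the seq 1 20 loop appends to rpc.txt is not visible in this excerpt, so it is omitted):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512

    # The stress client then runs in the background for 10 seconds against
    # the listener while the loop fills rpc.txt (PID 191320 in this run):
    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!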
00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.476 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.476 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.477 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.735 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.735 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:12.735 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.735 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.735 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.993 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.993 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:12.993 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.993 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.993 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.558 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.558 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:13.558 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.558 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.558 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.816 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.816 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:13.816 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.816 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.816 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.074 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.074 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:14.074 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.074 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.074 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.331 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.331 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:14.331 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.331 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.331 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.589 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.589 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:14.589 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.589 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.589 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.154 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.154 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:15.154 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.154 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.154 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.412 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.412 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:15.412 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.412 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.412 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.670 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.670 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:15.670 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.670 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.670 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.928 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.928 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:15.928 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.928 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.928 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.186 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.186 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:16.186 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.186 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.186 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.751 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.752 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:16.752 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.752 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.752 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.010 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.010 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:17.010 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.010 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.010 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.267 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.267 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:17.267 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.267 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.267 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.525 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.525 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:17.525 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.525 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.525 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.783 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.783 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:17.783 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.783 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.783 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.348 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.348 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:18.348 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.348 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.348 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.605 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.605 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:18.605 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.605 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.605 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.863 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.863 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:18.863 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.863 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.863 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.120 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.120 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:19.120 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.120 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.120 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.378 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.378 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:19.378 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.378 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.378 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.943 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.943 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:19.943 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.943 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.943 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.200 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.201 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:20.201 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.201 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.201 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.458 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.458 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:20.458 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.458 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.458 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.715 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.715 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:20.715 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.715 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.715 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.279 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.279 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:21.279 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.279 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.279 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.537 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.537 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:21.537 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.537 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.537 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.795 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.795 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:21.795 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.795 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.795 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.053 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.053 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:22.053 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.053 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.053 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.311 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.311 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:22.311 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.311 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.311 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.568 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 191320 00:13:22.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: 
(191320) - No such process 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 191320 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:22.827 rmmod nvme_tcp 00:13:22.827 rmmod nvme_fabrics 00:13:22.827 rmmod nvme_keyring 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 191283 ']' 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 191283 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 191283 ']' 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 191283 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 191283 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 191283' 00:13:22.827 killing process with pid 191283 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 191283 00:13:22.827 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 191283 00:13:23.087 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:23.087 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:23.087 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
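Annotation: the long run of near-identical records above is the stress-supervision loop in target/connect_stress.sh, visible through its xtrace markers (line 34: kill -0, line 35: rpc_cmd). A minimal sketch of that loop, reconstructed from the trace; only pid 191320 and the script line numbers come from the log, the rpc.txt-on-stdin replay and the variable names are assumptions:

  PERF_PID=191320                    # backgrounded stress workload (pid from the log)
  while kill -0 "$PERF_PID"; do      # line 34: the final failed probe prints "No such process" above
      rpc_cmd < "$testdir/rpc.txt"   # line 35: keep driving the target while the workload runs (payload assumed)
  done
  wait "$PERF_PID"                   # line 38: reap the workload once kill -0 reports it gone
  rm -f "$testdir/rpc.txt"           # line 39: drop the RPC replay file, matching the teardown above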
00:13:23.087 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.087 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.087 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.087 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.087 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.997 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:24.997 00:13:24.997 real 0m15.524s 00:13:24.997 user 0m40.134s 00:13:24.997 sys 0m4.710s 00:13:24.997 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.997 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.997 ************************************ 00:13:24.997 END TEST nvmf_connect_stress 00:13:24.997 ************************************ 00:13:24.997 14:07:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:24.997 14:07:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:24.997 14:07:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.997 14:07:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:25.267 ************************************ 00:13:25.267 START TEST nvmf_fused_ordering 00:13:25.267 ************************************ 00:13:25.267 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:25.267 * Looking for test storage... 
00:13:25.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.267 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:25.268 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:27.172 14:07:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:27.172 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:27.172 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
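Annotation: the probe records around this point map each supported PCI function to its kernel network interface through sysfs, which is where the cvl_0_0/cvl_0_1 names reported just below come from. A condensed sketch of that mapping, following the pci_net_devs expansions visible in the trace; the operstate read stands in for the trace's "up" check and is an assumption:

  pci_devs=(0000:09:00.0 0000:09:00.1)                  # the two E810 functions found above
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one sysfs entry per netdev on the function
      pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the interface names
      for net_dev in "${pci_net_devs[@]}"; do
          [[ $(cat "/sys/class/net/$net_dev/operstate") == up ]] || continue
          net_devs+=("$net_dev")                        # cvl_0_0 and cvl_0_1 in this run
      done
  done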
00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:27.172 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:27.173 Found net devices under 0000:09:00.0: cvl_0_0 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:27.173 Found net devices under 0000:09:00.1: cvl_0_1 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:27.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:13:27.173 00:13:27.173 --- 10.0.0.2 ping statistics --- 00:13:27.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.173 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:27.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:13:27.173 00:13:27.173 --- 10.0.0.1 ping statistics --- 00:13:27.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.173 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:27.173 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:27.432 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:27.432 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:27.432 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:27.432 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.432 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=194568 00:13:27.432 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:27.432 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 194568 00:13:27.432 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 194568 ']' 00:13:27.432 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.432 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.432 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.432 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:27.432 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.432 [2024-07-26 14:07:35.247574] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
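Annotation: nvmf_tcp_init above splits target and initiator across a network namespace so NVMe/TCP is exercised over a real two-port path: cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24 (target side) while cvl_0_1 stays in the root namespace with 10.0.0.1/24 (initiator side). The plumbing, condensed from the trace records above into one runnable sequence (interface and namespace names taken from the log, address flushes omitted):

  ip netns add cvl_0_0_ns_spdk                           # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in from the initiator port
  ping -c 1 10.0.0.2                                     # sanity check; the replies are logged above

Once both pings succeed, NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD wrapper, so the nvmf_tgt launched below runs (and listens) inside the target namespace via ip netns exec cvl_0_0_ns_spdk.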
00:13:27.432 [2024-07-26 14:07:35.247655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.432 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.432 [2024-07-26 14:07:35.311157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.432 [2024-07-26 14:07:35.420729] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.432 [2024-07-26 14:07:35.420790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.432 [2024-07-26 14:07:35.420805] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.432 [2024-07-26 14:07:35.420816] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.432 [2024-07-26 14:07:35.420826] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.432 [2024-07-26 14:07:35.420860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.690 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:27.690 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:27.690 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:27.690 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.691 [2024-07-26 14:07:35.562808] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.691 [2024-07-26 14:07:35.579029] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.691 NULL1 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.691 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:27.691 [2024-07-26 14:07:35.621639] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
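Annotation: before the fused_ordering app attaches, the rpc_cmd records above provision the target end to end: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and a 1000 MiB null bdev attached as its namespace (the "Namespace ID: 1 size: 1GB" line below). The same sequence expressed as direct scripts/rpc.py calls, a sketch only: rpc_cmd wraps the RPC client, and the default /var/tmp/spdk.sock socket path is assumed:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # flags copied verbatim from the trace
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512            # 1000 MiB backing bdev, 512 B blocks
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1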
00:13:27.691 [2024-07-26 14:07:35.621675] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid194593 ] 00:13:27.691 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.257 Attached to nqn.2016-06.io.spdk:cnode1 00:13:28.257 Namespace ID: 1 size: 1GB 00:13:28.257 fused_ordering(0) 00:13:28.257 fused_ordering(1) 00:13:28.258 fused_ordering(2) 00:13:28.258 fused_ordering(3) 00:13:28.258 fused_ordering(4) 00:13:28.258 fused_ordering(5) 00:13:28.258 fused_ordering(6) 00:13:28.258 fused_ordering(7) 00:13:28.258 fused_ordering(8) 00:13:28.258 fused_ordering(9) 00:13:28.258 fused_ordering(10) 00:13:28.258 fused_ordering(11) 00:13:28.258 fused_ordering(12) 00:13:28.258 fused_ordering(13) 00:13:28.258 fused_ordering(14) 00:13:28.258 fused_ordering(15) 00:13:28.258 fused_ordering(16) 00:13:28.258 fused_ordering(17) 00:13:28.258 fused_ordering(18) 00:13:28.258 fused_ordering(19) 00:13:28.258 fused_ordering(20) 00:13:28.258 fused_ordering(21) 00:13:28.258 fused_ordering(22) 00:13:28.258 fused_ordering(23) 00:13:28.258 fused_ordering(24) 00:13:28.258 fused_ordering(25) 00:13:28.258 fused_ordering(26) 00:13:28.258 fused_ordering(27) 00:13:28.258 fused_ordering(28) 00:13:28.258 fused_ordering(29) 00:13:28.258 fused_ordering(30) 00:13:28.258 fused_ordering(31) 00:13:28.258 fused_ordering(32) 00:13:28.258 fused_ordering(33) 00:13:28.258 fused_ordering(34) 00:13:28.258 fused_ordering(35) 00:13:28.258 fused_ordering(36) 00:13:28.258 fused_ordering(37) 00:13:28.258 fused_ordering(38) 00:13:28.258 fused_ordering(39) 00:13:28.258 fused_ordering(40) 00:13:28.258 fused_ordering(41) 00:13:28.258 fused_ordering(42) 00:13:28.258 fused_ordering(43) 00:13:28.258 fused_ordering(44) 00:13:28.258 fused_ordering(45) 00:13:28.258 fused_ordering(46) 00:13:28.258 fused_ordering(47) 00:13:28.258 fused_ordering(48) 00:13:28.258 fused_ordering(49) 00:13:28.258 fused_ordering(50) 00:13:28.258 fused_ordering(51) 00:13:28.258 fused_ordering(52) 00:13:28.258 fused_ordering(53) 00:13:28.258 fused_ordering(54) 00:13:28.258 fused_ordering(55) 00:13:28.258 fused_ordering(56) 00:13:28.258 fused_ordering(57) 00:13:28.258 fused_ordering(58) 00:13:28.258 fused_ordering(59) 00:13:28.258 fused_ordering(60) 00:13:28.258 fused_ordering(61) 00:13:28.258 fused_ordering(62) 00:13:28.258 fused_ordering(63) 00:13:28.258 fused_ordering(64) 00:13:28.258 fused_ordering(65) 00:13:28.258 fused_ordering(66) 00:13:28.258 fused_ordering(67) 00:13:28.258 fused_ordering(68) 00:13:28.258 fused_ordering(69) 00:13:28.258 fused_ordering(70) 00:13:28.258 fused_ordering(71) 00:13:28.258 fused_ordering(72) 00:13:28.258 fused_ordering(73) 00:13:28.258 fused_ordering(74) 00:13:28.258 fused_ordering(75) 00:13:28.258 fused_ordering(76) 00:13:28.258 fused_ordering(77) 00:13:28.258 fused_ordering(78) 00:13:28.258 fused_ordering(79) 00:13:28.258 fused_ordering(80) 00:13:28.258 fused_ordering(81) 00:13:28.258 fused_ordering(82) 00:13:28.258 fused_ordering(83) 00:13:28.258 fused_ordering(84) 00:13:28.258 fused_ordering(85) 00:13:28.258 fused_ordering(86) 00:13:28.258 fused_ordering(87) 00:13:28.258 fused_ordering(88) 00:13:28.258 fused_ordering(89) 00:13:28.258 fused_ordering(90) 00:13:28.258 fused_ordering(91) 00:13:28.258 fused_ordering(92) 00:13:28.258 fused_ordering(93) 00:13:28.258 fused_ordering(94) 00:13:28.258 fused_ordering(95) 00:13:28.258 fused_ordering(96) 
00:13:28.258 fused_ordering(97) 00:13:28.258 fused_ordering(98) 00:13:28.258 fused_ordering(99) 00:13:28.258 fused_ordering(100) 00:13:28.258 fused_ordering(101) 00:13:28.258 fused_ordering(102) 00:13:28.258 fused_ordering(103) 00:13:28.258 fused_ordering(104) 00:13:28.258 fused_ordering(105) 00:13:28.258 fused_ordering(106) 00:13:28.258 fused_ordering(107) 00:13:28.258 fused_ordering(108) 00:13:28.258 fused_ordering(109) 00:13:28.258 fused_ordering(110) 00:13:28.258 fused_ordering(111) 00:13:28.258 fused_ordering(112) 00:13:28.258 fused_ordering(113) 00:13:28.258 fused_ordering(114) 00:13:28.258 fused_ordering(115) 00:13:28.258 fused_ordering(116) 00:13:28.258 fused_ordering(117) 00:13:28.258 fused_ordering(118) 00:13:28.258 fused_ordering(119) 00:13:28.258 fused_ordering(120) 00:13:28.258 fused_ordering(121) 00:13:28.258 fused_ordering(122) 00:13:28.258 fused_ordering(123) 00:13:28.258 fused_ordering(124) 00:13:28.258 fused_ordering(125) 00:13:28.258 fused_ordering(126) 00:13:28.258 fused_ordering(127) 00:13:28.258 fused_ordering(128) 00:13:28.258 fused_ordering(129) 00:13:28.258 fused_ordering(130) 00:13:28.258 fused_ordering(131) 00:13:28.258 fused_ordering(132) 00:13:28.258 fused_ordering(133) 00:13:28.258 fused_ordering(134) 00:13:28.258 fused_ordering(135) 00:13:28.258 fused_ordering(136) 00:13:28.258 fused_ordering(137) 00:13:28.258 fused_ordering(138) 00:13:28.258 fused_ordering(139) 00:13:28.258 fused_ordering(140) 00:13:28.258 fused_ordering(141) 00:13:28.258 fused_ordering(142) 00:13:28.258 fused_ordering(143) 00:13:28.258 fused_ordering(144) 00:13:28.258 fused_ordering(145) 00:13:28.258 fused_ordering(146) 00:13:28.258 fused_ordering(147) 00:13:28.258 fused_ordering(148) 00:13:28.258 fused_ordering(149) 00:13:28.258 fused_ordering(150) 00:13:28.258 fused_ordering(151) 00:13:28.258 fused_ordering(152) 00:13:28.258 fused_ordering(153) 00:13:28.258 fused_ordering(154) 00:13:28.258 fused_ordering(155) 00:13:28.258 fused_ordering(156) 00:13:28.258 fused_ordering(157) 00:13:28.258 fused_ordering(158) 00:13:28.258 fused_ordering(159) 00:13:28.258 fused_ordering(160) 00:13:28.258 fused_ordering(161) 00:13:28.258 fused_ordering(162) 00:13:28.258 fused_ordering(163) 00:13:28.258 fused_ordering(164) 00:13:28.258 fused_ordering(165) 00:13:28.258 fused_ordering(166) 00:13:28.258 fused_ordering(167) 00:13:28.258 fused_ordering(168) 00:13:28.258 fused_ordering(169) 00:13:28.258 fused_ordering(170) 00:13:28.258 fused_ordering(171) 00:13:28.258 fused_ordering(172) 00:13:28.258 fused_ordering(173) 00:13:28.258 fused_ordering(174) 00:13:28.258 fused_ordering(175) 00:13:28.258 fused_ordering(176) 00:13:28.258 fused_ordering(177) 00:13:28.258 fused_ordering(178) 00:13:28.258 fused_ordering(179) 00:13:28.258 fused_ordering(180) 00:13:28.258 fused_ordering(181) 00:13:28.258 fused_ordering(182) 00:13:28.258 fused_ordering(183) 00:13:28.258 fused_ordering(184) 00:13:28.258 fused_ordering(185) 00:13:28.258 fused_ordering(186) 00:13:28.258 fused_ordering(187) 00:13:28.258 fused_ordering(188) 00:13:28.258 fused_ordering(189) 00:13:28.258 fused_ordering(190) 00:13:28.258 fused_ordering(191) 00:13:28.258 fused_ordering(192) 00:13:28.258 fused_ordering(193) 00:13:28.258 fused_ordering(194) 00:13:28.258 fused_ordering(195) 00:13:28.258 fused_ordering(196) 00:13:28.258 fused_ordering(197) 00:13:28.258 fused_ordering(198) 00:13:28.258 fused_ordering(199) 00:13:28.258 fused_ordering(200) 00:13:28.258 fused_ordering(201) 00:13:28.258 fused_ordering(202) 00:13:28.258 fused_ordering(203) 00:13:28.258 
fused_ordering(204) 00:13:28.258 fused_ordering(205) 00:13:28.516 fused_ordering(206) 00:13:28.516 fused_ordering(207) 00:13:28.516 fused_ordering(208) 00:13:28.516 fused_ordering(209) 00:13:28.516 fused_ordering(210) 00:13:28.516 fused_ordering(211) 00:13:28.516 fused_ordering(212) 00:13:28.516 fused_ordering(213) 00:13:28.516 fused_ordering(214) 00:13:28.516 fused_ordering(215) 00:13:28.516 fused_ordering(216) 00:13:28.516 fused_ordering(217) 00:13:28.516 fused_ordering(218) 00:13:28.516 fused_ordering(219) 00:13:28.516 fused_ordering(220) 00:13:28.516 fused_ordering(221) 00:13:28.516 fused_ordering(222) 00:13:28.516 fused_ordering(223) 00:13:28.516 fused_ordering(224) 00:13:28.516 fused_ordering(225) 00:13:28.516 fused_ordering(226) 00:13:28.516 fused_ordering(227) 00:13:28.516 fused_ordering(228) 00:13:28.516 fused_ordering(229) 00:13:28.516 fused_ordering(230) 00:13:28.516 fused_ordering(231) 00:13:28.516 fused_ordering(232) 00:13:28.516 fused_ordering(233) 00:13:28.516 fused_ordering(234) 00:13:28.516 fused_ordering(235) 00:13:28.516 fused_ordering(236) 00:13:28.516 fused_ordering(237) 00:13:28.516 fused_ordering(238) 00:13:28.516 fused_ordering(239) 00:13:28.516 fused_ordering(240) 00:13:28.516 fused_ordering(241) 00:13:28.516 fused_ordering(242) 00:13:28.516 fused_ordering(243) 00:13:28.516 fused_ordering(244) 00:13:28.516 fused_ordering(245) 00:13:28.516 fused_ordering(246) 00:13:28.516 fused_ordering(247) 00:13:28.516 fused_ordering(248) 00:13:28.516 fused_ordering(249) 00:13:28.516 fused_ordering(250) 00:13:28.516 fused_ordering(251) 00:13:28.516 fused_ordering(252) 00:13:28.516 fused_ordering(253) 00:13:28.516 fused_ordering(254) 00:13:28.516 fused_ordering(255) 00:13:28.516 fused_ordering(256) 00:13:28.516 fused_ordering(257) 00:13:28.516 fused_ordering(258) 00:13:28.516 fused_ordering(259) 00:13:28.516 fused_ordering(260) 00:13:28.516 fused_ordering(261) 00:13:28.516 fused_ordering(262) 00:13:28.516 fused_ordering(263) 00:13:28.517 fused_ordering(264) 00:13:28.517 fused_ordering(265) 00:13:28.517 fused_ordering(266) 00:13:28.517 fused_ordering(267) 00:13:28.517 fused_ordering(268) 00:13:28.517 fused_ordering(269) 00:13:28.517 fused_ordering(270) 00:13:28.517 fused_ordering(271) 00:13:28.517 fused_ordering(272) 00:13:28.517 fused_ordering(273) 00:13:28.517 fused_ordering(274) 00:13:28.517 fused_ordering(275) 00:13:28.517 fused_ordering(276) 00:13:28.517 fused_ordering(277) 00:13:28.517 fused_ordering(278) 00:13:28.517 fused_ordering(279) 00:13:28.517 fused_ordering(280) 00:13:28.517 fused_ordering(281) 00:13:28.517 fused_ordering(282) 00:13:28.517 fused_ordering(283) 00:13:28.517 fused_ordering(284) 00:13:28.517 fused_ordering(285) 00:13:28.517 fused_ordering(286) 00:13:28.517 fused_ordering(287) 00:13:28.517 fused_ordering(288) 00:13:28.517 fused_ordering(289) 00:13:28.517 fused_ordering(290) 00:13:28.517 fused_ordering(291) 00:13:28.517 fused_ordering(292) 00:13:28.517 fused_ordering(293) 00:13:28.517 fused_ordering(294) 00:13:28.517 fused_ordering(295) 00:13:28.517 fused_ordering(296) 00:13:28.517 fused_ordering(297) 00:13:28.517 fused_ordering(298) 00:13:28.517 fused_ordering(299) 00:13:28.517 fused_ordering(300) 00:13:28.517 fused_ordering(301) 00:13:28.517 fused_ordering(302) 00:13:28.517 fused_ordering(303) 00:13:28.517 fused_ordering(304) 00:13:28.517 fused_ordering(305) 00:13:28.517 fused_ordering(306) 00:13:28.517 fused_ordering(307) 00:13:28.517 fused_ordering(308) 00:13:28.517 fused_ordering(309) 00:13:28.517 fused_ordering(310) 00:13:28.517 fused_ordering(311) 
00:13:28.517 fused_ordering(312) 00:13:28.517 fused_ordering(313) [... fused_ordering(314) through fused_ordering(1021) omitted: contiguous ascending sequence, timestamps 00:13:28.517-00:13:29.911 ...] 00:13:29.911 fused_ordering(1022) 00:13:29.911 fused_ordering(1023) 00:13:29.911 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:29.911 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:29.911 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:29.911 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:29.911 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:29.911 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:29.911 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:29.911 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:29.911 rmmod nvme_tcp 00:13:30.170 rmmod nvme_fabrics 00:13:30.170 rmmod nvme_keyring 00:13:30.170 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:30.170 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:30.170 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:30.170 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 194568 ']' 00:13:30.170 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 194568 00:13:30.170 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 194568 ']' 00:13:30.170 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 194568 00:13:30.170 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:30.170 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:30.170 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 194568 00:13:30.170 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:30.170 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:30.170 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 194568' 00:13:30.170 killing process with pid 194568 00:13:30.170 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 194568 00:13:30.170 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 194568 00:13:30.429 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:30.429 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:30.429 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:30.429 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:30.429 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:30.429 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.429 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.429 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.334 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:32.334 00:13:32.334 real 0m7.258s 00:13:32.334 user 0m5.197s 00:13:32.334 sys 0m2.671s 00:13:32.334 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.334 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:32.334 ************************************ 00:13:32.334 END TEST nvmf_fused_ordering 00:13:32.334 ************************************
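The teardown trace above follows nvmftestfini's fixed order: drop the EXIT trap, sync, unload the NVMe-oF kernel modules, kill the target by PID, then flush the test interface. A condensed, standalone sketch of that sequence in plain shell (the PID and interface name are the values from this run; the real helpers live in nvmf/common.sh and common/autotest_common.sh and do more bookkeeping):

    # Sketch only: mirrors the cleanup order shown in the trace above.
    cleanup_nvmf_target() {
        local pid=$1                  # 194568 in this run
        trap - SIGINT SIGTERM EXIT    # keep the error trap from re-entering cleanup
        sync                          # settle outstanding I/O before unloading modules
        modprobe -v -r nvme-tcp       # also drops nvme_fabrics and nvme_keyring, as logged
        modprobe -v -r nvme-fabrics
        if kill -0 "$pid" 2>/dev/null; then   # only signal if the target is still alive
            kill "$pid"
            wait "$pid" 2>/dev/null || true   # reap it so listeners and shm are released
        fi
        ip -4 addr flush cvl_0_1      # clear the initiator-side test address
    }

Unloading the modules before killing the target matches the trace ordering; the kernel initiator has already disconnected by this point, so nothing still depends on nvme_tcp.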
00:13:32.334 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:32.334 14:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:32.334 14:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.334 14:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:32.334 ************************************ 00:13:32.334 START TEST nvmf_ns_masking 00:13:32.334 ************************************ 00:13:32.334 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:32.594 * Looking for test storage... 00:13:32.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
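Among the settings sourced above, the host identity pair is worth calling out: nvme gen-hostnqn (nvme-cli) emits a UUID-based NQN, and the trace shows the UUID suffix being reused verbatim as the host ID. A minimal sketch of that derivation, assuming only nvme-cli is installed (the exact common.sh code may differ):

    # Build the --hostnqn/--hostid pair the way the values above suggest.
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-...
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}  # strip the NQN prefix, keep the UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    printf '%s\n' "${NVME_HOST[@]}"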
00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain directories repeated from earlier sourcings]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[previous PATH value] 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[previous PATH value] 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[remaining duplicated toolchain entries]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']'
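Each source of paths/export.sh prepends the same toolchain directories again, which is why the PATH values above accumulate duplicated /opt/... prefixes. Lookup still works (first match wins), so this is cosmetic; if it mattered, a generic de-duplication pass like the following would fix it (a common shell idiom, not something export.sh itself does):

    # Keep the first occurrence of every PATH entry, preserving order.
    dedup_path() {
        local IFS=: dir out=
        local -A seen
        for dir in $PATH; do
            [[ -n $dir && -z ${seen[$dir]+x} ]] && { seen[$dir]=1; out+=${out:+:}$dir; }
        done
        printf '%s\n' "$out"
    }
    PATH=$(dedup_path)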
00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d4a8470d-b955-4d3f-ba41-0a43a6a6a05d 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2bbc861b-b426-4138-b37a-1b478560613b 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=eff0f1a3-b36f-4ec5-bdb7-7ac308a1e716 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.594 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:34.500 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:34.500 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:34.500 Found net devices under 0000:09:00.0: cvl_0_0 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:34.500 Found net devices under 0000:09:00.1: cvl_0_1 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:34.500 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.501 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.501 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:34.501 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.501 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.501 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:34.501 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:34.501 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.501 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.760 14:07:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:34.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:13:34.760 00:13:34.760 --- 10.0.0.2 ping statistics --- 00:13:34.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.760 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:34.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:13:34.760 00:13:34.760 --- 10.0.0.1 ping statistics --- 00:13:34.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.760 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=196792 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 196792 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 196792 ']' 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:34.760 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:34.760 [2024-07-26 14:07:42.703450] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:13:34.760 [2024-07-26 14:07:42.703543] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.760 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.760 [2024-07-26 14:07:42.773866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.018 [2024-07-26 14:07:42.883664] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.018 [2024-07-26 14:07:42.883718] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.018 [2024-07-26 14:07:42.883746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.018 [2024-07-26 14:07:42.883757] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.018 [2024-07-26 14:07:42.883767] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.018 [2024-07-26 14:07:42.883802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.952 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:35.952 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:35.952 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:35.952 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:35.952 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:35.952 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.952 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:35.952 [2024-07-26 14:07:43.934256] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.952 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:35.952 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:35.952 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:36.520 Malloc1 00:13:36.520 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:36.520 Malloc2 00:13:36.520 14:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:37.087 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:37.344 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.602 [2024-07-26 14:07:45.391161] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.602 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:37.602 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eff0f1a3-b36f-4ec5-bdb7-7ac308a1e716 -a 10.0.0.2 -s 4420 -i 4 00:13:37.602 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:37.602 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:37.602 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.602 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:37.602 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
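Stripped of the xtrace noise, the provisioning above reduces to four commands: create the subsystem, expose the Malloc1 bdev as namespace 1, open a TCP listener, and connect from the initiator with an explicit host NQN and host ID so that masking rules can later be keyed on this host. The values below are the ones from this run (rpc.py path abbreviated):

    rpc_py=spdk/scripts/rpc.py   # full path in the log: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I eff0f1a3-b36f-4ec5-bdb7-7ac308a1e716 -a 10.0.0.2 -s 4420 -i 4

The -a flag on nvmf_create_subsystem allows any host to connect; the masking steps further down restrict per-namespace visibility instead of per-subsystem access.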
00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:40.131 [ 0]:0x1 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82774897c7ed4d789c057f70f772a8ad 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82774897c7ed4d789c057f70f772a8ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:40.131 [ 0]:0x1 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:40.131 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.131 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82774897c7ed4d789c057f70f772a8ad 00:13:40.131 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82774897c7ed4d789c057f70f772a8ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.131 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:40.131 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.131 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:40.131 [ 1]:0x2 00:13:40.131 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:40.131 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.131 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f37e4cc6bd1482caa03cecbba5d1eda 00:13:40.131 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f37e4cc6bd1482caa03cecbba5d1eda != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.131 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:40.131 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.131 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.388 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:40.644 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:40.644 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eff0f1a3-b36f-4ec5-bdb7-7ac308a1e716 -a 10.0.0.2 -s 4420 -i 4 00:13:40.900 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:40.900 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:40.900 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.900 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:40.900 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:40.900 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
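The NOT ns_is_visible 0x1 assertion that follows is the core of the masking check: after namespace 1 was re-added with --no-auto-visible, the controller still enumerates the NSID, but id-ns reports an all-zero NGUID until nvmf_ns_add_host grants this host access. A sketch of the probe as reconstructed from the trace (the real helper in ns_masking.sh also echoes the [ 0]:0x1 bookkeeping lines):

    # Succeeds only when the namespace is genuinely exposed to this host.
    ns_is_visible() {
        local nsid=$1                                  # e.g. 0x1
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != 00000000000000000000000000000000 ]]   # masked namespaces read back all zeros
    }
    ns_is_visible 0x1 || echo "nsid 1 is masked for this host"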
/dev/nvme0 -n 0x1 -o json 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.423 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:43.423 [ 0]:0x2 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f37e4cc6bd1482caa03cecbba5d1eda 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f37e4cc6bd1482caa03cecbba5d1eda != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:43.423 [ 0]:0x1 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82774897c7ed4d789c057f70f772a8ad 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82774897c7ed4d789c057f70f772a8ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:43.423 [ 1]:0x2 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:13:43.423 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.681 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f37e4cc6bd1482caa03cecbba5d1eda 00:13:43.681 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f37e4cc6bd1482caa03cecbba5d1eda != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.681 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:43.681 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:43.681 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:43.681 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:43.681 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:43.681 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.681 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:43.681 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.681 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:43.681 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.681 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:43.681 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:43.681 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:43.939 [ 0]:0x2 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f37e4cc6bd1482caa03cecbba5d1eda 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f37e4cc6bd1482caa03cecbba5d1eda != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.939 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:44.196 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:44.196 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eff0f1a3-b36f-4ec5-bdb7-7ac308a1e716 -a 10.0.0.2 -s 4420 -i 4 00:13:44.453 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:44.453 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:44.454 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.454 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:44.454 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:44.454 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:46.350 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:46.350 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:46.350 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:46.350 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:46.350 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:46.350 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:46.350 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:46.350 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:46.608 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:46.608 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:46.608 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:46.608 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:13:46.608 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:46.608 [ 0]:0x1 00:13:46.608 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:46.608 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.608 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82774897c7ed4d789c057f70f772a8ad 00:13:46.608 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82774897c7ed4d789c057f70f772a8ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.608 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:46.608 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.608 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:46.608 [ 1]:0x2 00:13:46.608 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.608 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:46.867 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f37e4cc6bd1482caa03cecbba5d1eda 00:13:46.867 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f37e4cc6bd1482caa03cecbba5d1eda != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.867 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:47.125 14:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:47.125 [ 0]:0x2 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:47.125 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.125 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f37e4cc6bd1482caa03cecbba5d1eda 00:13:47.125 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f37e4cc6bd1482caa03cecbba5d1eda != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.125 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:47.125 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:47.125 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:47.125 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.125 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:47.125 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.125 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:47.125 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.125 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:47.125 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.125 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:47.125 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:47.383 [2024-07-26 14:07:55.292985] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:47.383 request: 00:13:47.383 { 00:13:47.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.383 "nsid": 2, 00:13:47.383 "host": "nqn.2016-06.io.spdk:host1", 00:13:47.383 "method": "nvmf_ns_remove_host", 00:13:47.383 "req_id": 1 00:13:47.383 } 00:13:47.383 Got JSON-RPC error response 00:13:47.383 response: 00:13:47.383 { 00:13:47.383 "code": -32602, 00:13:47.383 "message": "Invalid parameters" 00:13:47.383 } 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:47.383 [ 0]:0x2 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:47.383 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.642 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f37e4cc6bd1482caa03cecbba5d1eda 00:13:47.642 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f37e4cc6bd1482caa03cecbba5d1eda != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.642 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:47.642 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.642 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=198425 00:13:47.642 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:47.642 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.642 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 198425 /var/tmp/host.sock 00:13:47.642 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 198425 ']' 00:13:47.642 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:47.642 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:47.642 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:47.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:47.642 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:47.642 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:47.642 [2024-07-26 14:07:55.516441] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
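The trace above exercises SPDK's per-host namespace masking end to end. A condensed sketch of the flow, using the same NQNs and the same rpc.py/nvme invocations that appear in the trace (rpc.py stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path):

    # attach a namespace hidden from all hosts, then grant/revoke one host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

    # connect as host1 and probe visibility: a masked namespace drops out of
    # list-ns, and id-ns reports an all-zero NGUID for it
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420
    nvme list-ns /dev/nvme0 | grep 0x1
    nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
    [[ $nguid != 00000000000000000000000000000000 ]] && echo "nsid 1 visible"

Note the negative case in the trace: nvmf_ns_remove_host against nsid 2 fails with JSON-RPC -32602 "Invalid parameters", which the test expects (the NOT wrapper), since nsid 2 was added without --no-auto-visible and per-host visibility only applies to namespaces created with that flag.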
00:13:47.642 [2024-07-26 14:07:55.516554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198425 ] 00:13:47.642 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.642 [2024-07-26 14:07:55.579488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.900 [2024-07-26 14:07:55.687924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.157 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:48.157 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:48.157 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.415 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:48.674 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d4a8470d-b955-4d3f-ba41-0a43a6a6a05d 00:13:48.674 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:48.674 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D4A8470DB9554D3FBA410A43A6A6A05D -i 00:13:48.932 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2bbc861b-b426-4138-b37a-1b478560613b 00:13:48.932 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:48.932 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2BBC861BB4264138B37A1B478560613B -i 00:13:49.190 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:49.448 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:49.706 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:49.706 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:49.964 nvme0n1 00:13:49.964 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:49.964 14:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:50.530 nvme1n2 00:13:50.530 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:50.530 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:50.530 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:50.530 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:50.530 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:50.787 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:50.787 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:50.787 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:50.787 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:51.045 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d4a8470d-b955-4d3f-ba41-0a43a6a6a05d == \d\4\a\8\4\7\0\d\-\b\9\5\5\-\4\d\3\f\-\b\a\4\1\-\0\a\4\3\a\6\a\6\a\0\5\d ]] 00:13:51.045 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:51.045 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:51.045 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:51.303 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2bbc861b-b426-4138-b37a-1b478560613b == \2\b\b\c\8\6\1\b\-\b\4\2\6\-\4\1\3\8\-\b\3\7\a\-\1\b\4\7\8\5\6\0\6\1\3\b ]] 00:13:51.303 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 198425 00:13:51.303 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 198425 ']' 00:13:51.303 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 198425 00:13:51.303 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:13:51.303 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:51.303 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 198425 00:13:51.303 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:51.303 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:51.303 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 198425' 00:13:51.303 killing process with pid 198425 00:13:51.303 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 198425 00:13:51.303 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 198425 00:13:51.868 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.868 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:51.868 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:51.868 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:51.868 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:51.868 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:51.868 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:51.868 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:51.868 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:51.868 rmmod nvme_tcp 00:13:52.126 rmmod nvme_fabrics 00:13:52.126 rmmod nvme_keyring 00:13:52.126 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:52.126 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:52.126 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:52.126 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 196792 ']' 00:13:52.126 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 196792 00:13:52.126 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 196792 ']' 00:13:52.126 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 196792 00:13:52.126 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:13:52.127 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:52.127 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 196792 00:13:52.127 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:52.127 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:52.127 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 196792' 00:13:52.127 killing process with pid 196792 00:13:52.127 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 196792 00:13:52.127 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 196792 00:13:52.387 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:52.387 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:52.387 14:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:52.387 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:52.387 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:52.387 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.387 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.387 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:54.926 00:13:54.926 real 0m21.977s 00:13:54.926 user 0m28.559s 00:13:54.926 sys 0m4.168s 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:54.926 ************************************ 00:13:54.926 END TEST nvmf_ns_masking 00:13:54.926 ************************************ 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:54.926 ************************************ 00:13:54.926 START TEST nvmf_nvme_cli 00:13:54.926 ************************************ 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:54.926 * Looking for test storage... 
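One detail from the masking test just completed is worth pulling out: the -g values passed to nvmf_subsystem_add_ns (D4A8470D..., 2BBC861B...) are NGUIDs derived from UUIDs by uuid2nguid in nvmf/common.sh. The trace only shows the tr -d - step; a functionally equivalent sketch, with the uppercasing inferred from the values seen in the trace:

    uuid2nguid() {
        # NGUID = the UUID uppercased with its dashes stripped
        local uuid=${1^^}
        tr -d '-' <<< "$uuid"
    }
    uuid2nguid d4a8470d-b955-4d3f-ba41-0a43a6a6a05d   # -> D4A8470DB9554D3FBA410A43A6A6A05D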
00:13:54.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.926 14:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:13:54.926 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.829 14:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:56.829 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:56.830 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:56.830 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:56.830 14:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:56.830 Found net devices under 0000:09:00.0: cvl_0_0 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:56.830 Found net devices under 0000:09:00.1: cvl_0_1 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.830 14:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:56.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:13:56.830 00:13:56.830 --- 10.0.0.2 ping statistics --- 00:13:56.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.830 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:56.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:13:56.830 00:13:56.830 --- 10.0.0.1 ping statistics --- 00:13:56.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.830 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=200979 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 200979 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 200979 ']' 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.830 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:56.830 [2024-07-26 14:08:04.581650] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
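The nvmf_tcp_init sequence above builds a point-to-point test link out of the two e810 ports: cvl_0_0 moves into a private network namespace as the target interface at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the NVMe/TCP port on the initiator side, and one ping in each direction proves the path before nvmf_tgt starts inside the namespace. A condensed, standalone sketch of the same topology, using the interface names and addresses from this run (run as root; assumes the two ports are already cabled back-to-back as on this rig):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"                                        # private namespace for the target side
    ip link set cvl_0_0 netns "$NS"                           # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through on the initiator port
    ping -c 1 10.0.0.2                                        # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                    # target namespace -> initiator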
00:13:56.830 [2024-07-26 14:08:04.581731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.830 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.830 [2024-07-26 14:08:04.645159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:56.830 [2024-07-26 14:08:04.746921] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.830 [2024-07-26 14:08:04.746973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.830 [2024-07-26 14:08:04.746992] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.830 [2024-07-26 14:08:04.747009] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.830 [2024-07-26 14:08:04.747022] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.830 [2024-07-26 14:08:04.747110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.830 [2024-07-26 14:08:04.747217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.830 [2024-07-26 14:08:04.747302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.830 [2024-07-26 14:08:04.747309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:57.089 [2024-07-26 14:08:04.901229] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:57.089 Malloc0 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:57.089 14:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:57.089 Malloc1 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:57.089 [2024-07-26 14:08:04.987089] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.089 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:13:57.347 00:13:57.347 Discovery Log Number of Records 2, Generation counter 2 00:13:57.347 =====Discovery Log Entry 0====== 00:13:57.347 trtype: tcp 00:13:57.347 adrfam: ipv4 00:13:57.347 subtype: current discovery subsystem 00:13:57.347 treq: not required 
00:13:57.347 portid: 0 00:13:57.347 trsvcid: 4420 00:13:57.347 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:57.347 traddr: 10.0.0.2 00:13:57.347 eflags: explicit discovery connections, duplicate discovery information 00:13:57.347 sectype: none 00:13:57.347 =====Discovery Log Entry 1====== 00:13:57.347 trtype: tcp 00:13:57.347 adrfam: ipv4 00:13:57.347 subtype: nvme subsystem 00:13:57.347 treq: not required 00:13:57.347 portid: 0 00:13:57.347 trsvcid: 4420 00:13:57.347 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:57.347 traddr: 10.0.0.2 00:13:57.347 eflags: none 00:13:57.347 sectype: none 00:13:57.347 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:57.347 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:57.347 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:57.347 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:57.347 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:57.347 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:57.347 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:57.347 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:57.347 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:57.347 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:57.347 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:57.913 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:57.913 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:13:57.913 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:57.913 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:57.913 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:57.913 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:13:59.810 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:59.810 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:59.810 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:59.810 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:59.810 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:59.810 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:13:59.810 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:59.810 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:59.810 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:59.810 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:00.068 /dev/nvme0n1 ]] 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:00.068 14:08:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:00.068 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:00.068 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:00.068 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:00.068 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:00.068 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:00.068 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:00.068 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:00.068 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:00.068 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:00.068 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:00.068 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:00.068 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:00.326 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:00.326 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:00.326 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:00.326 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:00.326 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:00.326 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:00.326 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:00.326 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:00.326 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:00.327 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:00.327 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.327 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:00.327 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.327 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:00.327 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:00.327 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:00.327 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:00.327 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:00.327 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:00.327 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:00.327 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:00.327 rmmod nvme_tcp 00:14:00.327 rmmod nvme_fabrics 00:14:00.327 rmmod nvme_keyring 00:14:00.327 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:00.585 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:00.585 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:00.585 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 200979 ']' 00:14:00.585 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 200979 00:14:00.585 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 200979 ']' 00:14:00.585 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 200979 00:14:00.585 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:00.585 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:00.585 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 200979 00:14:00.585 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:00.585 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:00.585 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 200979' 00:14:00.585 killing process with pid 200979 00:14:00.585 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 200979 00:14:00.585 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 200979 00:14:00.845 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:00.845 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:00.845 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:00.845 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.845 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:00.845 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.845 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.845 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.753 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:02.753 00:14:02.753 real 0m8.350s 00:14:02.753 user 0m15.989s 00:14:02.753 sys 0m2.147s 00:14:02.753 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:02.753 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:02.753 ************************************ 00:14:02.753 END TEST nvmf_nvme_cli 00:14:02.753 ************************************ 00:14:02.753 14:08:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:02.753 14:08:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:02.753 14:08:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:02.753 14:08:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:02.753 14:08:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:03.012 ************************************ 00:14:03.012 START TEST nvmf_vfio_user 00:14:03.012 ************************************ 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:03.012 * Looking for test storage... 
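That completes nvmf_nvme_cli: the in-namespace target exported two malloc bdevs as namespaces of nqn.2016-06.io.spdk:cnode1, the kernel initiator discovered and connected to them with nvme-cli, the test counted the resulting block devices by serial number, then disconnected and deleted the subsystem before killing the target. The initiator-side flow exercised above condenses to the following sketch (hostnqn/hostid as generated for this run; 10.0.0.2:4420 is the target's TCP listener):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
    nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420   # two records: discovery subsystem + cnode1
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2: one block device per exported namespace
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # reports: disconnected 1 controller(s)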
00:14:03.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:03.012 14:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=201830 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 201830' 00:14:03.012 Process pid: 201830 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 201830 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 201830 ']' 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:03.012 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:03.012 [2024-07-26 14:08:10.887428] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:14:03.012 [2024-07-26 14:08:10.887518] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.012 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.012 [2024-07-26 14:08:10.944417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.271 [2024-07-26 14:08:11.051779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.271 [2024-07-26 14:08:11.051827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:03.271 [2024-07-26 14:08:11.051855] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.271 [2024-07-26 14:08:11.051867] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.271 [2024-07-26 14:08:11.051877] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.271 [2024-07-26 14:08:11.051938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.271 [2024-07-26 14:08:11.052019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.271 [2024-07-26 14:08:11.052021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.271 [2024-07-26 14:08:11.051980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.271 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.271 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:03.271 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:04.203 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:04.461 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:04.461 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:04.461 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:04.461 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:04.461 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:04.719 Malloc1 00:14:04.719 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:04.976 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:05.235 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:05.492 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:05.492 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:05.492 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:05.750 Malloc2 00:14:05.750 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
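(Device 2's namespace and listener attachment continue just below.) With the VFIOUSER transport the listener address is a local socket directory rather than an IP:port pair, so each controller gets its own directory under /var/run/vfio-user and -s 0 stands in for the service id. The per-device pattern exercised for both controllers here condenses to the following sketch (rpc.py standing in for the full scripts/rpc.py path used in the log):

    rpc.py nvmf_create_transport -t VFIOUSER                # once per target
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i   # directory that will hold the vfio-user socket
        rpc.py bdev_malloc_create 64 512 -b Malloc$i
        rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done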
00:14:06.007 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:06.265 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:06.523 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:06.523 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:06.523 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:06.523 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:06.523 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:06.523 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:06.523 [2024-07-26 14:08:14.503017] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:14:06.523 [2024-07-26 14:08:14.503060] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202256 ] 00:14:06.523 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.523 [2024-07-26 14:08:14.536930] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:06.783 [2024-07-26 14:08:14.545061] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:06.783 [2024-07-26 14:08:14.545091] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2b3840b000 00:14:06.783 [2024-07-26 14:08:14.547541] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.783 [2024-07-26 14:08:14.548055] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.783 [2024-07-26 14:08:14.549058] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.783 [2024-07-26 14:08:14.550063] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:06.783 [2024-07-26 14:08:14.551064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:06.783 [2024-07-26 14:08:14.552068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.783 [2024-07-26 14:08:14.553075] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:06.783 [2024-07-26 14:08:14.554081] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.783 [2024-07-26 14:08:14.555087] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:06.783 [2024-07-26 14:08:14.555107] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2b38400000 00:14:06.783 [2024-07-26 14:08:14.556233] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:06.783 [2024-07-26 14:08:14.574928] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:06.783 [2024-07-26 14:08:14.574966] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:06.783 [2024-07-26 14:08:14.577227] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:06.783 [2024-07-26 14:08:14.577290] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:06.783 [2024-07-26 14:08:14.577392] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:06.783 [2024-07-26 14:08:14.577423] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:06.783 [2024-07-26 14:08:14.577435] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:06.783 [2024-07-26 14:08:14.578221] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:06.783 [2024-07-26 14:08:14.578245] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:06.783 [2024-07-26 14:08:14.578258] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:06.783 [2024-07-26 14:08:14.579221] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:06.783 [2024-07-26 14:08:14.579241] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:06.783 [2024-07-26 14:08:14.579255] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:06.783 [2024-07-26 14:08:14.580227] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:06.783 [2024-07-26 14:08:14.580246] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:06.783 [2024-07-26 14:08:14.581230] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:06.783 [2024-07-26 14:08:14.581249] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:06.783 [2024-07-26 14:08:14.581258] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:06.783 [2024-07-26 14:08:14.581269] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:06.783 [2024-07-26 14:08:14.581379] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:06.783 [2024-07-26 14:08:14.581387] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:06.783 [2024-07-26 14:08:14.581396] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:06.783 [2024-07-26 14:08:14.582238] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:06.783 [2024-07-26 14:08:14.583243] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:06.783 [2024-07-26 14:08:14.584248] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:06.783 [2024-07-26 14:08:14.585243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:06.783 [2024-07-26 14:08:14.585338] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:06.783 [2024-07-26 14:08:14.586264] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:06.783 [2024-07-26 14:08:14.586283] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:06.783 [2024-07-26 14:08:14.586292] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:06.783 [2024-07-26 14:08:14.586316] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:06.783 [2024-07-26 14:08:14.586329] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:06.783 [2024-07-26 14:08:14.586360] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:06.783 [2024-07-26 14:08:14.586370] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.783 [2024-07-26 14:08:14.586376] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.783 [2024-07-26 14:08:14.586398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.783 [2024-07-26 14:08:14.586464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:06.783 [2024-07-26 14:08:14.586483] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:06.783 [2024-07-26 14:08:14.586491] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:06.783 [2024-07-26 14:08:14.586499] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:06.783 [2024-07-26 14:08:14.586520] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:06.783 [2024-07-26 14:08:14.586552] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:06.783 [2024-07-26 14:08:14.586563] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:06.783 [2024-07-26 14:08:14.586589] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:06.783 [2024-07-26 14:08:14.586605] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:06.783 [2024-07-26 14:08:14.586627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:06.783 [2024-07-26 14:08:14.586648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:06.783 [2024-07-26 14:08:14.586672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.783 [2024-07-26 14:08:14.586686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.783 [2024-07-26 14:08:14.586699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.783 [2024-07-26 14:08:14.586712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.783 [2024-07-26 14:08:14.586721] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:06.783 [2024-07-26 14:08:14.586739] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:06.783 [2024-07-26 14:08:14.586758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:06.783 [2024-07-26 14:08:14.586771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:06.783 [2024-07-26 14:08:14.586783] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:06.783 
[2024-07-26 14:08:14.586792] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:06.783 [2024-07-26 14:08:14.586808] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:06.783 [2024-07-26 14:08:14.586820] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:06.783 [2024-07-26 14:08:14.586833] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:06.783 [2024-07-26 14:08:14.586860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:06.784 [2024-07-26 14:08:14.586945] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:06.784 [2024-07-26 14:08:14.586962] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:06.784 [2024-07-26 14:08:14.586977] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:06.784 [2024-07-26 14:08:14.586985] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:06.784 [2024-07-26 14:08:14.586991] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.784 [2024-07-26 14:08:14.587000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:06.784 [2024-07-26 14:08:14.587014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:06.784 [2024-07-26 14:08:14.587032] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:06.784 [2024-07-26 14:08:14.587049] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:06.784 [2024-07-26 14:08:14.587064] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:06.784 [2024-07-26 14:08:14.587076] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:06.784 [2024-07-26 14:08:14.587084] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.784 [2024-07-26 14:08:14.587090] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.784 [2024-07-26 14:08:14.587099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.784 [2024-07-26 14:08:14.587128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:06.784 [2024-07-26 14:08:14.587152] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:14:06.784 [2024-07-26 14:08:14.587166] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:06.784 [2024-07-26 14:08:14.587182] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:06.784 [2024-07-26 14:08:14.587191] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.784 [2024-07-26 14:08:14.587197] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.784 [2024-07-26 14:08:14.587206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.784 [2024-07-26 14:08:14.587222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:06.784 [2024-07-26 14:08:14.587236] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:06.784 [2024-07-26 14:08:14.587248] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:06.784 [2024-07-26 14:08:14.587263] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:06.784 [2024-07-26 14:08:14.587276] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:06.784 [2024-07-26 14:08:14.587285] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:06.784 [2024-07-26 14:08:14.587294] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:06.784 [2024-07-26 14:08:14.587303] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:06.784 [2024-07-26 14:08:14.587311] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:06.784 [2024-07-26 14:08:14.587320] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:06.784 [2024-07-26 14:08:14.587346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:06.784 [2024-07-26 14:08:14.587364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:06.784 [2024-07-26 14:08:14.587383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:06.784 [2024-07-26 14:08:14.587395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:06.784 [2024-07-26 14:08:14.587412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:06.784 [2024-07-26 
14:08:14.587423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:06.784 [2024-07-26 14:08:14.587439] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:06.784 [2024-07-26 14:08:14.587451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:06.784 [2024-07-26 14:08:14.587474] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:06.784 [2024-07-26 14:08:14.587485] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:06.784 [2024-07-26 14:08:14.587491] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:06.784 [2024-07-26 14:08:14.587497] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:06.784 [2024-07-26 14:08:14.587503] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:06.784 [2024-07-26 14:08:14.587541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:06.784 [2024-07-26 14:08:14.587557] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:06.784 [2024-07-26 14:08:14.587566] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:06.784 [2024-07-26 14:08:14.587572] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.784 [2024-07-26 14:08:14.587581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:06.784 [2024-07-26 14:08:14.587592] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:06.784 [2024-07-26 14:08:14.587601] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.784 [2024-07-26 14:08:14.587607] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.784 [2024-07-26 14:08:14.587616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.784 [2024-07-26 14:08:14.587628] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:06.784 [2024-07-26 14:08:14.587636] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:06.784 [2024-07-26 14:08:14.587642] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.784 [2024-07-26 14:08:14.587651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:06.784 [2024-07-26 14:08:14.587663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:06.784 [2024-07-26 14:08:14.587684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:06.784 [2024-07-26 
14:08:14.587704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:06.784 [2024-07-26 14:08:14.587717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:06.784 ===================================================== 00:14:06.784 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:06.784 ===================================================== 00:14:06.784 Controller Capabilities/Features 00:14:06.784 ================================ 00:14:06.784 Vendor ID: 4e58 00:14:06.784 Subsystem Vendor ID: 4e58 00:14:06.784 Serial Number: SPDK1 00:14:06.784 Model Number: SPDK bdev Controller 00:14:06.784 Firmware Version: 24.09 00:14:06.784 Recommended Arb Burst: 6 00:14:06.784 IEEE OUI Identifier: 8d 6b 50 00:14:06.784 Multi-path I/O 00:14:06.784 May have multiple subsystem ports: Yes 00:14:06.784 May have multiple controllers: Yes 00:14:06.784 Associated with SR-IOV VF: No 00:14:06.784 Max Data Transfer Size: 131072 00:14:06.784 Max Number of Namespaces: 32 00:14:06.784 Max Number of I/O Queues: 127 00:14:06.784 NVMe Specification Version (VS): 1.3 00:14:06.784 NVMe Specification Version (Identify): 1.3 00:14:06.784 Maximum Queue Entries: 256 00:14:06.784 Contiguous Queues Required: Yes 00:14:06.784 Arbitration Mechanisms Supported 00:14:06.784 Weighted Round Robin: Not Supported 00:14:06.784 Vendor Specific: Not Supported 00:14:06.784 Reset Timeout: 15000 ms 00:14:06.784 Doorbell Stride: 4 bytes 00:14:06.784 NVM Subsystem Reset: Not Supported 00:14:06.784 Command Sets Supported 00:14:06.784 NVM Command Set: Supported 00:14:06.784 Boot Partition: Not Supported 00:14:06.784 Memory Page Size Minimum: 4096 bytes 00:14:06.784 Memory Page Size Maximum: 4096 bytes 00:14:06.784 Persistent Memory Region: Not Supported 00:14:06.784 Optional Asynchronous Events Supported 00:14:06.784 Namespace Attribute Notices: Supported 00:14:06.784 Firmware Activation Notices: Not Supported 00:14:06.784 ANA Change Notices: Not Supported 00:14:06.784 PLE Aggregate Log Change Notices: Not Supported 00:14:06.784 LBA Status Info Alert Notices: Not Supported 00:14:06.784 EGE Aggregate Log Change Notices: Not Supported 00:14:06.784 Normal NVM Subsystem Shutdown event: Not Supported 00:14:06.784 Zone Descriptor Change Notices: Not Supported 00:14:06.784 Discovery Log Change Notices: Not Supported 00:14:06.784 Controller Attributes 00:14:06.785 128-bit Host Identifier: Supported 00:14:06.785 Non-Operational Permissive Mode: Not Supported 00:14:06.785 NVM Sets: Not Supported 00:14:06.785 Read Recovery Levels: Not Supported 00:14:06.785 Endurance Groups: Not Supported 00:14:06.785 Predictable Latency Mode: Not Supported 00:14:06.785 Traffic Based Keep ALive: Not Supported 00:14:06.785 Namespace Granularity: Not Supported 00:14:06.785 SQ Associations: Not Supported 00:14:06.785 UUID List: Not Supported 00:14:06.785 Multi-Domain Subsystem: Not Supported 00:14:06.785 Fixed Capacity Management: Not Supported 00:14:06.785 Variable Capacity Management: Not Supported 00:14:06.785 Delete Endurance Group: Not Supported 00:14:06.785 Delete NVM Set: Not Supported 00:14:06.785 Extended LBA Formats Supported: Not Supported 00:14:06.785 Flexible Data Placement Supported: Not Supported 00:14:06.785 00:14:06.785 Controller Memory Buffer Support 00:14:06.785 ================================ 00:14:06.785 Supported: No 00:14:06.785 00:14:06.785 Persistent 
Memory Region Support 00:14:06.785 ================================ 00:14:06.785 Supported: No 00:14:06.785 00:14:06.785 Admin Command Set Attributes 00:14:06.785 ============================ 00:14:06.785 Security Send/Receive: Not Supported 00:14:06.785 Format NVM: Not Supported 00:14:06.785 Firmware Activate/Download: Not Supported 00:14:06.785 Namespace Management: Not Supported 00:14:06.785 Device Self-Test: Not Supported 00:14:06.785 Directives: Not Supported 00:14:06.785 NVMe-MI: Not Supported 00:14:06.785 Virtualization Management: Not Supported 00:14:06.785 Doorbell Buffer Config: Not Supported 00:14:06.785 Get LBA Status Capability: Not Supported 00:14:06.785 Command & Feature Lockdown Capability: Not Supported 00:14:06.785 Abort Command Limit: 4 00:14:06.785 Async Event Request Limit: 4 00:14:06.785 Number of Firmware Slots: N/A 00:14:06.785 Firmware Slot 1 Read-Only: N/A 00:14:06.785 Firmware Activation Without Reset: N/A 00:14:06.785 Multiple Update Detection Support: N/A 00:14:06.785 Firmware Update Granularity: No Information Provided 00:14:06.785 Per-Namespace SMART Log: No 00:14:06.785 Asymmetric Namespace Access Log Page: Not Supported 00:14:06.785 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:06.785 Command Effects Log Page: Supported 00:14:06.785 Get Log Page Extended Data: Supported 00:14:06.785 Telemetry Log Pages: Not Supported 00:14:06.785 Persistent Event Log Pages: Not Supported 00:14:06.785 Supported Log Pages Log Page: May Support 00:14:06.785 Commands Supported & Effects Log Page: Not Supported 00:14:06.785 Feature Identifiers & Effects Log Page:May Support 00:14:06.785 NVMe-MI Commands & Effects Log Page: May Support 00:14:06.785 Data Area 4 for Telemetry Log: Not Supported 00:14:06.785 Error Log Page Entries Supported: 128 00:14:06.785 Keep Alive: Supported 00:14:06.785 Keep Alive Granularity: 10000 ms 00:14:06.785 00:14:06.785 NVM Command Set Attributes 00:14:06.785 ========================== 00:14:06.785 Submission Queue Entry Size 00:14:06.785 Max: 64 00:14:06.785 Min: 64 00:14:06.785 Completion Queue Entry Size 00:14:06.785 Max: 16 00:14:06.785 Min: 16 00:14:06.785 Number of Namespaces: 32 00:14:06.785 Compare Command: Supported 00:14:06.785 Write Uncorrectable Command: Not Supported 00:14:06.785 Dataset Management Command: Supported 00:14:06.785 Write Zeroes Command: Supported 00:14:06.785 Set Features Save Field: Not Supported 00:14:06.785 Reservations: Not Supported 00:14:06.785 Timestamp: Not Supported 00:14:06.785 Copy: Supported 00:14:06.785 Volatile Write Cache: Present 00:14:06.785 Atomic Write Unit (Normal): 1 00:14:06.785 Atomic Write Unit (PFail): 1 00:14:06.785 Atomic Compare & Write Unit: 1 00:14:06.785 Fused Compare & Write: Supported 00:14:06.785 Scatter-Gather List 00:14:06.785 SGL Command Set: Supported (Dword aligned) 00:14:06.785 SGL Keyed: Not Supported 00:14:06.785 SGL Bit Bucket Descriptor: Not Supported 00:14:06.785 SGL Metadata Pointer: Not Supported 00:14:06.785 Oversized SGL: Not Supported 00:14:06.785 SGL Metadata Address: Not Supported 00:14:06.785 SGL Offset: Not Supported 00:14:06.785 Transport SGL Data Block: Not Supported 00:14:06.785 Replay Protected Memory Block: Not Supported 00:14:06.785 00:14:06.785 Firmware Slot Information 00:14:06.785 ========================= 00:14:06.785 Active slot: 1 00:14:06.785 Slot 1 Firmware Revision: 24.09 00:14:06.785 00:14:06.785 00:14:06.785 Commands Supported and Effects 00:14:06.785 ============================== 00:14:06.785 Admin Commands 00:14:06.785 -------------- 00:14:06.785 Get 
Log Page (02h): Supported 00:14:06.785 Identify (06h): Supported 00:14:06.785 Abort (08h): Supported 00:14:06.785 Set Features (09h): Supported 00:14:06.785 Get Features (0Ah): Supported 00:14:06.785 Asynchronous Event Request (0Ch): Supported 00:14:06.785 Keep Alive (18h): Supported 00:14:06.785 I/O Commands 00:14:06.785 ------------ 00:14:06.785 Flush (00h): Supported LBA-Change 00:14:06.785 Write (01h): Supported LBA-Change 00:14:06.785 Read (02h): Supported 00:14:06.785 Compare (05h): Supported 00:14:06.785 Write Zeroes (08h): Supported LBA-Change 00:14:06.785 Dataset Management (09h): Supported LBA-Change 00:14:06.785 Copy (19h): Supported LBA-Change 00:14:06.785 00:14:06.785 Error Log 00:14:06.785 ========= 00:14:06.785 00:14:06.785 Arbitration 00:14:06.785 =========== 00:14:06.785 Arbitration Burst: 1 00:14:06.785 00:14:06.785 Power Management 00:14:06.785 ================ 00:14:06.785 Number of Power States: 1 00:14:06.785 Current Power State: Power State #0 00:14:06.785 Power State #0: 00:14:06.785 Max Power: 0.00 W 00:14:06.785 Non-Operational State: Operational 00:14:06.785 Entry Latency: Not Reported 00:14:06.785 Exit Latency: Not Reported 00:14:06.785 Relative Read Throughput: 0 00:14:06.785 Relative Read Latency: 0 00:14:06.785 Relative Write Throughput: 0 00:14:06.785 Relative Write Latency: 0 00:14:06.785 Idle Power: Not Reported 00:14:06.785 Active Power: Not Reported 00:14:06.785 Non-Operational Permissive Mode: Not Supported 00:14:06.785 00:14:06.785 Health Information 00:14:06.785 ================== 00:14:06.785 Critical Warnings: 00:14:06.785 Available Spare Space: OK 00:14:06.785 Temperature: OK 00:14:06.785 Device Reliability: OK 00:14:06.785 Read Only: No 00:14:06.785 Volatile Memory Backup: OK 00:14:06.785 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:06.785 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:06.785 Available Spare: 0% 00:14:06.785 Available Spare Threshold: 0% 00:14:06.785 [2024-07-26 14:08:14.587850] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:06.785 [2024-07-26 14:08:14.587867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:06.785 [2024-07-26 14:08:14.587909] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:06.785 [2024-07-26 14:08:14.587927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.785 [2024-07-26 14:08:14.587938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.785 [2024-07-26 14:08:14.587948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.785 [2024-07-26 14:08:14.587958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.785 [2024-07-26 14:08:14.590539] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:06.785 [2024-07-26 14:08:14.590562] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:06.785 [2024-07-26 14:08:14.591282] vfio_user.c:2798:disable_ctrlr: *NOTICE*:
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:06.785 [2024-07-26 14:08:14.591358] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:06.785 [2024-07-26 14:08:14.591372] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:06.785 [2024-07-26 14:08:14.592293] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:06.785 [2024-07-26 14:08:14.592317] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:06.785 [2024-07-26 14:08:14.592375] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:06.785 [2024-07-26 14:08:14.594333] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:06.785 Life Percentage Used: 0% 00:14:06.786 Data Units Read: 0 00:14:06.786 Data Units Written: 0 00:14:06.786 Host Read Commands: 0 00:14:06.786 Host Write Commands: 0 00:14:06.786 Controller Busy Time: 0 minutes 00:14:06.786 Power Cycles: 0 00:14:06.786 Power On Hours: 0 hours 00:14:06.786 Unsafe Shutdowns: 0 00:14:06.786 Unrecoverable Media Errors: 0 00:14:06.786 Lifetime Error Log Entries: 0 00:14:06.786 Warning Temperature Time: 0 minutes 00:14:06.786 Critical Temperature Time: 0 minutes 00:14:06.786 00:14:06.786 Number of Queues 00:14:06.786 ================ 00:14:06.786 Number of I/O Submission Queues: 127 00:14:06.786 Number of I/O Completion Queues: 127 00:14:06.786 00:14:06.786 Active Namespaces 00:14:06.786 ================= 00:14:06.786 Namespace ID:1 00:14:06.786 Error Recovery Timeout: Unlimited 00:14:06.786 Command Set Identifier: NVM (00h) 00:14:06.786 Deallocate: Supported 00:14:06.786 Deallocated/Unwritten Error: Not Supported 00:14:06.786 Deallocated Read Value: Unknown 00:14:06.786 Deallocate in Write Zeroes: Not Supported 00:14:06.786 Deallocated Guard Field: 0xFFFF 00:14:06.786 Flush: Supported 00:14:06.786 Reservation: Supported 00:14:06.786 Namespace Sharing Capabilities: Multiple Controllers 00:14:06.786 Size (in LBAs): 131072 (0GiB) 00:14:06.786 Capacity (in LBAs): 131072 (0GiB) 00:14:06.786 Utilization (in LBAs): 131072 (0GiB) 00:14:06.786 NGUID: 808E6E318D7D43FA8F5BE8483AC3DC5D 00:14:06.786 UUID: 808e6e31-8d7d-43fa-8f5b-e8483ac3dc5d 00:14:06.786 Thin Provisioning: Not Supported 00:14:06.786 Per-NS Atomic Units: Yes 00:14:06.786 Atomic Boundary Size (Normal): 0 00:14:06.786 Atomic Boundary Size (PFail): 0 00:14:06.786 Atomic Boundary Offset: 0 00:14:06.786 Maximum Single Source Range Length: 65535 00:14:06.786 Maximum Copy Length: 65535 00:14:06.786 Maximum Source Range Count: 1 00:14:06.786 NGUID/EUI64 Never Reused: No 00:14:06.786 Namespace Write Protected: No 00:14:06.786 Number of LBA Formats: 1 00:14:06.786 Current LBA Format: LBA Format #00 00:14:06.786 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:06.786 00:14:06.786 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:06.786 EAL: No free 2048 kB hugepages reported
on node 1 00:14:07.044 [2024-07-26 14:08:14.835425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:12.307 Initializing NVMe Controllers 00:14:12.307 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:12.307 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:12.307 Initialization complete. Launching workers. 00:14:12.307 ======================================================== 00:14:12.307 Latency(us) 00:14:12.307 Device Information : IOPS MiB/s Average min max 00:14:12.307 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35030.67 136.84 3653.39 1161.36 8466.90 00:14:12.307 ======================================================== 00:14:12.307 Total : 35030.67 136.84 3653.39 1161.36 8466.90 00:14:12.307 00:14:12.307 [2024-07-26 14:08:19.861287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:12.307 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:12.307 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.307 [2024-07-26 14:08:20.106583] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:17.568 Initializing NVMe Controllers 00:14:17.568 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:17.568 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:17.568 Initialization complete. Launching workers. 
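The two spdk_nvme_perf passes here differ only in the workload flag: the read pass above and the write pass whose results follow. A minimal sketch of reproducing such a pass by hand from this workspace's SPDK tree, with informal flag glosses (the -s and -g readings are assumptions; -g at least lines up with the --single-file-segments EAL parameter echoed later in this log):

# -q 128: queue depth; -o 4096: I/O size in bytes; -t 5: run time in seconds
# -c 0x2: core mask (core 1 only); -s 256: DPDK hugepage memory in MB (assumed)
# -g: single-file hugetlbfs segments (assumed; cf. --single-file-segments below)
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2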
00:14:17.568 ======================================================== 00:14:17.568 Latency(us) 00:14:17.568 Device Information : IOPS MiB/s Average min max 00:14:17.568 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16007.60 62.53 8004.59 6022.97 15979.54 00:14:17.568 ======================================================== 00:14:17.568 Total : 16007.60 62.53 8004.59 6022.97 15979.54 00:14:17.568 00:14:17.568 [2024-07-26 14:08:25.144465] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:17.568 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:17.568 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.568 [2024-07-26 14:08:25.354521] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:22.828 [2024-07-26 14:08:30.442010] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:22.828 Initializing NVMe Controllers 00:14:22.828 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:22.828 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:22.828 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:22.828 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:22.828 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:22.828 Initialization complete. Launching workers. 00:14:22.828 Starting thread on core 2 00:14:22.828 Starting thread on core 3 00:14:22.828 Starting thread on core 1 00:14:22.828 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:22.828 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.828 [2024-07-26 14:08:30.748457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:26.119 [2024-07-26 14:08:33.813411] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:26.119 Initializing NVMe Controllers 00:14:26.119 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:26.119 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:26.119 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:26.119 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:26.119 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:26.119 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:26.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:26.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:26.119 Initialization complete. Launching workers. 
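The arbitration example above was started with only -t 3, -d 256 and -g; the tool echoes the fully expanded configuration it actually ran (-q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1), i.e. a 50/50 random read/write mix at queue depth 64 across cores 0-3, with -a 0 presumably selecting the spec-default round-robin arbitration (this controller advertises Weighted Round Robin: Not Supported, so WRR would not be usable anyway). Its per-core throughput follows below. A direct invocation, under the same workspace assumption as the sketch above:

# Only the harness-supplied flags; the tool fills in the defaults echoed above
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/arbitration -t 3 -d 256 -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'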
00:14:26.119 Starting thread on core 1 with urgent priority queue 00:14:26.119 Starting thread on core 2 with urgent priority queue 00:14:26.119 Starting thread on core 3 with urgent priority queue 00:14:26.119 Starting thread on core 0 with urgent priority queue 00:14:26.119 SPDK bdev Controller (SPDK1 ) core 0: 5209.33 IO/s 19.20 secs/100000 ios 00:14:26.119 SPDK bdev Controller (SPDK1 ) core 1: 5049.67 IO/s 19.80 secs/100000 ios 00:14:26.119 SPDK bdev Controller (SPDK1 ) core 2: 5110.00 IO/s 19.57 secs/100000 ios 00:14:26.119 SPDK bdev Controller (SPDK1 ) core 3: 5448.33 IO/s 18.35 secs/100000 ios 00:14:26.119 ======================================================== 00:14:26.119 00:14:26.119 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:26.119 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.119 [2024-07-26 14:08:34.123133] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:26.376 Initializing NVMe Controllers 00:14:26.376 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:26.376 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:26.376 Namespace ID: 1 size: 0GB 00:14:26.376 Initialization complete. 00:14:26.376 INFO: using host memory buffer for IO 00:14:26.376 Hello world! 00:14:26.376 [2024-07-26 14:08:34.156694] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:26.376 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:26.376 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.634 [2024-07-26 14:08:34.459004] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:27.569 Initializing NVMe Controllers 00:14:27.569 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:27.569 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:27.569 Initialization complete. Launching workers. 
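The overhead run above (test/nvme/overhead) times the host-side submit and complete paths of each I/O rather than end-to-end latency: the summary line that follows reports avg/min/max in nanoseconds, and the histogram tables underneath bucket the same samples in microseconds, each row giving the cumulative percentage with the per-bucket sample count in parentheses. The -H flag appears to be what enables the histogram output (an assumption; the remaining flags mirror the perf runs above). A direct invocation, same workspace assumption as before:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'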
00:14:27.569 submit (in ns) avg, min, max = 7878.0, 3500.0, 4005366.7 00:14:27.569 complete (in ns) avg, min, max = 27687.4, 2074.4, 4016140.0 00:14:27.569 00:14:27.569 Submit histogram 00:14:27.569 ================ 00:14:27.569 Range in us Cumulative Count 00:14:27.569 3.484 - 3.508: 0.1502% ( 20) 00:14:27.569 3.508 - 3.532: 1.4799% ( 177) 00:14:27.569 3.532 - 3.556: 3.7109% ( 297) 00:14:27.569 3.556 - 3.579: 9.3525% ( 751) 00:14:27.569 3.579 - 3.603: 15.2494% ( 785) 00:14:27.569 3.603 - 3.627: 23.7906% ( 1137) 00:14:27.569 3.627 - 3.650: 31.8059% ( 1067) 00:14:27.569 3.650 - 3.674: 39.7912% ( 1063) 00:14:27.569 3.674 - 3.698: 46.6572% ( 914) 00:14:27.569 3.698 - 3.721: 52.9823% ( 842) 00:14:27.569 3.721 - 3.745: 57.5270% ( 605) 00:14:27.569 3.745 - 3.769: 61.5535% ( 536) 00:14:27.569 3.769 - 3.793: 65.5273% ( 529) 00:14:27.569 3.793 - 3.816: 68.6373% ( 414) 00:14:27.569 3.816 - 3.840: 72.4684% ( 510) 00:14:27.569 3.840 - 3.864: 76.6076% ( 551) 00:14:27.569 3.864 - 3.887: 80.0180% ( 454) 00:14:27.569 3.887 - 3.911: 83.0829% ( 408) 00:14:27.569 3.911 - 3.935: 85.6746% ( 345) 00:14:27.569 3.935 - 3.959: 87.6352% ( 261) 00:14:27.569 3.959 - 3.982: 89.0550% ( 189) 00:14:27.569 3.982 - 4.006: 90.4297% ( 183) 00:14:27.569 4.006 - 4.030: 91.3762% ( 126) 00:14:27.569 4.030 - 4.053: 92.2401% ( 115) 00:14:27.569 4.053 - 4.077: 93.0814% ( 112) 00:14:27.569 4.077 - 4.101: 93.7425% ( 88) 00:14:27.569 4.101 - 4.124: 94.3134% ( 76) 00:14:27.569 4.124 - 4.148: 94.8167% ( 67) 00:14:27.569 4.148 - 4.172: 95.1623% ( 46) 00:14:27.569 4.172 - 4.196: 95.4477% ( 38) 00:14:27.569 4.196 - 4.219: 95.6430% ( 26) 00:14:27.569 4.219 - 4.243: 95.8308% ( 25) 00:14:27.569 4.243 - 4.267: 95.9811% ( 20) 00:14:27.569 4.267 - 4.290: 96.1538% ( 23) 00:14:27.569 4.290 - 4.314: 96.2515% ( 13) 00:14:27.569 4.314 - 4.338: 96.3792% ( 17) 00:14:27.569 4.338 - 4.361: 96.5144% ( 18) 00:14:27.569 4.361 - 4.385: 96.6046% ( 12) 00:14:27.569 4.385 - 4.409: 96.6496% ( 6) 00:14:27.569 4.409 - 4.433: 96.6797% ( 4) 00:14:27.569 4.433 - 4.456: 96.7022% ( 3) 00:14:27.569 4.456 - 4.480: 96.7323% ( 4) 00:14:27.569 4.480 - 4.504: 96.7398% ( 1) 00:14:27.569 4.504 - 4.527: 96.7473% ( 1) 00:14:27.569 4.527 - 4.551: 96.7773% ( 4) 00:14:27.569 4.551 - 4.575: 96.7924% ( 2) 00:14:27.569 4.575 - 4.599: 96.7999% ( 1) 00:14:27.569 4.599 - 4.622: 96.8074% ( 1) 00:14:27.569 4.622 - 4.646: 96.8149% ( 1) 00:14:27.569 4.670 - 4.693: 96.8224% ( 1) 00:14:27.569 4.693 - 4.717: 96.8374% ( 2) 00:14:27.569 4.717 - 4.741: 96.8525% ( 2) 00:14:27.569 4.741 - 4.764: 96.8750% ( 3) 00:14:27.569 4.764 - 4.788: 96.8975% ( 3) 00:14:27.569 4.788 - 4.812: 96.9126% ( 2) 00:14:27.569 4.812 - 4.836: 96.9651% ( 7) 00:14:27.569 4.836 - 4.859: 97.0177% ( 7) 00:14:27.569 4.859 - 4.883: 97.0778% ( 8) 00:14:27.569 4.883 - 4.907: 97.1154% ( 5) 00:14:27.569 4.907 - 4.930: 97.1680% ( 7) 00:14:27.569 4.930 - 4.954: 97.2206% ( 7) 00:14:27.569 4.954 - 4.978: 97.2656% ( 6) 00:14:27.569 4.978 - 5.001: 97.3182% ( 7) 00:14:27.569 5.001 - 5.025: 97.3633% ( 6) 00:14:27.569 5.025 - 5.049: 97.4309% ( 9) 00:14:27.569 5.049 - 5.073: 97.4459% ( 2) 00:14:27.569 5.073 - 5.096: 97.4684% ( 3) 00:14:27.569 5.096 - 5.120: 97.5135% ( 6) 00:14:27.569 5.120 - 5.144: 97.5436% ( 4) 00:14:27.569 5.144 - 5.167: 97.5962% ( 7) 00:14:27.569 5.167 - 5.191: 97.6037% ( 1) 00:14:27.569 5.191 - 5.215: 97.6412% ( 5) 00:14:27.569 5.215 - 5.239: 97.6562% ( 2) 00:14:27.569 5.239 - 5.262: 97.6788% ( 3) 00:14:27.569 5.262 - 5.286: 97.7013% ( 3) 00:14:27.569 5.286 - 5.310: 97.7163% ( 2) 00:14:27.569 5.310 - 5.333: 97.7314% ( 2) 
00:14:27.569 5.333 - 5.357: 97.7539% ( 3) 00:14:27.569 5.357 - 5.381: 97.7764% ( 3) 00:14:27.569 5.381 - 5.404: 97.7990% ( 3) 00:14:27.569 5.404 - 5.428: 97.8140% ( 2) 00:14:27.569 5.428 - 5.452: 97.8215% ( 1) 00:14:27.569 5.452 - 5.476: 97.8441% ( 3) 00:14:27.569 5.476 - 5.499: 97.8666% ( 3) 00:14:27.569 5.499 - 5.523: 97.8741% ( 1) 00:14:27.569 5.523 - 5.547: 97.8816% ( 1) 00:14:27.569 5.594 - 5.618: 97.8966% ( 2) 00:14:27.569 5.618 - 5.641: 97.9117% ( 2) 00:14:27.569 5.641 - 5.665: 97.9192% ( 1) 00:14:27.569 5.665 - 5.689: 97.9267% ( 1) 00:14:27.569 5.689 - 5.713: 97.9342% ( 1) 00:14:27.569 5.855 - 5.879: 97.9492% ( 2) 00:14:27.569 5.902 - 5.926: 97.9567% ( 1) 00:14:27.569 5.973 - 5.997: 97.9642% ( 1) 00:14:27.569 6.021 - 6.044: 97.9793% ( 2) 00:14:27.569 6.163 - 6.210: 98.0018% ( 3) 00:14:27.569 6.637 - 6.684: 98.0093% ( 1) 00:14:27.569 6.684 - 6.732: 98.0168% ( 1) 00:14:27.569 6.732 - 6.779: 98.0469% ( 4) 00:14:27.569 6.779 - 6.827: 98.0544% ( 1) 00:14:27.569 6.921 - 6.969: 98.0619% ( 1) 00:14:27.569 7.064 - 7.111: 98.0844% ( 3) 00:14:27.569 7.206 - 7.253: 98.0919% ( 1) 00:14:27.569 7.396 - 7.443: 98.0995% ( 1) 00:14:27.569 7.538 - 7.585: 98.1070% ( 1) 00:14:27.569 7.585 - 7.633: 98.1370% ( 4) 00:14:27.569 7.633 - 7.680: 98.1520% ( 2) 00:14:27.569 7.680 - 7.727: 98.1596% ( 1) 00:14:27.569 7.775 - 7.822: 98.1671% ( 1) 00:14:27.569 7.822 - 7.870: 98.1746% ( 1) 00:14:27.569 8.059 - 8.107: 98.1821% ( 1) 00:14:27.569 8.107 - 8.154: 98.1896% ( 1) 00:14:27.569 8.154 - 8.201: 98.1971% ( 1) 00:14:27.569 8.201 - 8.249: 98.2046% ( 1) 00:14:27.569 8.296 - 8.344: 98.2197% ( 2) 00:14:27.569 8.344 - 8.391: 98.2272% ( 1) 00:14:27.569 8.391 - 8.439: 98.2347% ( 1) 00:14:27.569 8.486 - 8.533: 98.2497% ( 2) 00:14:27.569 8.628 - 8.676: 98.2572% ( 1) 00:14:27.569 8.676 - 8.723: 98.2647% ( 1) 00:14:27.569 8.723 - 8.770: 98.2797% ( 2) 00:14:27.569 8.818 - 8.865: 98.3023% ( 3) 00:14:27.569 8.865 - 8.913: 98.3098% ( 1) 00:14:27.569 9.055 - 9.102: 98.3248% ( 2) 00:14:27.569 9.102 - 9.150: 98.3398% ( 2) 00:14:27.570 9.197 - 9.244: 98.3549% ( 2) 00:14:27.570 9.339 - 9.387: 98.3624% ( 1) 00:14:27.570 9.434 - 9.481: 98.3699% ( 1) 00:14:27.570 9.529 - 9.576: 98.3849% ( 2) 00:14:27.570 9.576 - 9.624: 98.4075% ( 3) 00:14:27.570 9.624 - 9.671: 98.4150% ( 1) 00:14:27.570 9.719 - 9.766: 98.4300% ( 2) 00:14:27.570 9.766 - 9.813: 98.4450% ( 2) 00:14:27.570 9.908 - 9.956: 98.4600% ( 2) 00:14:27.570 9.956 - 10.003: 98.4675% ( 1) 00:14:27.570 10.098 - 10.145: 98.4751% ( 1) 00:14:27.570 10.193 - 10.240: 98.4826% ( 1) 00:14:27.570 10.287 - 10.335: 98.4901% ( 1) 00:14:27.570 10.335 - 10.382: 98.5051% ( 2) 00:14:27.570 10.477 - 10.524: 98.5126% ( 1) 00:14:27.570 10.524 - 10.572: 98.5201% ( 1) 00:14:27.570 10.572 - 10.619: 98.5276% ( 1) 00:14:27.570 10.619 - 10.667: 98.5352% ( 1) 00:14:27.570 10.667 - 10.714: 98.5427% ( 1) 00:14:27.570 10.761 - 10.809: 98.5577% ( 2) 00:14:27.570 10.951 - 10.999: 98.5652% ( 1) 00:14:27.570 11.093 - 11.141: 98.5802% ( 2) 00:14:27.570 11.141 - 11.188: 98.5877% ( 1) 00:14:27.570 11.283 - 11.330: 98.5953% ( 1) 00:14:27.570 11.425 - 11.473: 98.6028% ( 1) 00:14:27.570 11.567 - 11.615: 98.6103% ( 1) 00:14:27.570 11.757 - 11.804: 98.6178% ( 1) 00:14:27.570 11.804 - 11.852: 98.6328% ( 2) 00:14:27.570 11.947 - 11.994: 98.6403% ( 1) 00:14:27.570 12.041 - 12.089: 98.6553% ( 2) 00:14:27.570 12.136 - 12.231: 98.6629% ( 1) 00:14:27.570 12.421 - 12.516: 98.6854% ( 3) 00:14:27.570 12.516 - 12.610: 98.6929% ( 1) 00:14:27.570 12.610 - 12.705: 98.7004% ( 1) 00:14:27.570 12.705 - 12.800: 98.7079% ( 1) 00:14:27.570 
12.990 - 13.084: 98.7305% ( 3) 00:14:27.570 13.084 - 13.179: 98.7455% ( 2) 00:14:27.570 13.179 - 13.274: 98.7605% ( 2) 00:14:27.570 13.274 - 13.369: 98.7755% ( 2) 00:14:27.570 13.653 - 13.748: 98.7906% ( 2) 00:14:27.570 13.843 - 13.938: 98.7981% ( 1) 00:14:27.570 13.938 - 14.033: 98.8131% ( 2) 00:14:27.570 14.127 - 14.222: 98.8281% ( 2) 00:14:27.570 14.507 - 14.601: 98.8431% ( 2) 00:14:27.570 14.601 - 14.696: 98.8582% ( 2) 00:14:27.570 14.886 - 14.981: 98.8657% ( 1) 00:14:27.570 14.981 - 15.076: 98.8732% ( 1) 00:14:27.570 15.076 - 15.170: 98.8807% ( 1) 00:14:27.570 15.360 - 15.455: 98.8882% ( 1) 00:14:27.570 15.550 - 15.644: 98.8957% ( 1) 00:14:27.570 15.644 - 15.739: 98.9032% ( 1) 00:14:27.570 16.972 - 17.067: 98.9183% ( 2) 00:14:27.570 17.161 - 17.256: 98.9333% ( 2) 00:14:27.570 17.256 - 17.351: 98.9558% ( 3) 00:14:27.570 17.351 - 17.446: 98.9709% ( 2) 00:14:27.570 17.446 - 17.541: 98.9934% ( 3) 00:14:27.570 17.541 - 17.636: 99.0309% ( 5) 00:14:27.570 17.636 - 17.730: 99.0685% ( 5) 00:14:27.570 17.730 - 17.825: 99.0986% ( 4) 00:14:27.570 17.825 - 17.920: 99.1136% ( 2) 00:14:27.570 17.920 - 18.015: 99.1812% ( 9) 00:14:27.570 18.015 - 18.110: 99.2188% ( 5) 00:14:27.570 18.110 - 18.204: 99.2864% ( 9) 00:14:27.570 18.204 - 18.299: 99.3540% ( 9) 00:14:27.570 18.299 - 18.394: 99.3840% ( 4) 00:14:27.570 18.394 - 18.489: 99.4967% ( 15) 00:14:27.570 18.489 - 18.584: 99.5493% ( 7) 00:14:27.570 18.584 - 18.679: 99.6169% ( 9) 00:14:27.570 18.679 - 18.773: 99.6319% ( 2) 00:14:27.570 18.773 - 18.868: 99.6770% ( 6) 00:14:27.570 18.868 - 18.963: 99.7145% ( 5) 00:14:27.570 18.963 - 19.058: 99.7371% ( 3) 00:14:27.570 19.058 - 19.153: 99.7596% ( 3) 00:14:27.570 19.342 - 19.437: 99.7671% ( 1) 00:14:27.570 19.437 - 19.532: 99.7746% ( 1) 00:14:27.570 19.532 - 19.627: 99.7822% ( 1) 00:14:27.570 19.627 - 19.721: 99.7897% ( 1) 00:14:27.570 19.721 - 19.816: 99.7972% ( 1) 00:14:27.570 19.911 - 20.006: 99.8047% ( 1) 00:14:27.570 20.196 - 20.290: 99.8122% ( 1) 00:14:27.570 21.049 - 21.144: 99.8197% ( 1) 00:14:27.570 21.902 - 21.997: 99.8347% ( 2) 00:14:27.570 23.135 - 23.230: 99.8422% ( 1) 00:14:27.570 24.841 - 25.031: 99.8498% ( 1) 00:14:27.570 25.410 - 25.600: 99.8573% ( 1) 00:14:27.570 26.169 - 26.359: 99.8648% ( 1) 00:14:27.570 28.255 - 28.444: 99.8873% ( 3) 00:14:27.570 28.444 - 28.634: 99.8948% ( 1) 00:14:27.570 31.479 - 31.668: 99.9023% ( 1) 00:14:27.570 3980.705 - 4004.978: 99.9925% ( 12) 00:14:27.570 4004.978 - 4029.250: 100.0000% ( 1) 00:14:27.570 00:14:27.570 Complete histogram 00:14:27.570 ================== 00:14:27.570 Range in us Cumulative Count 00:14:27.570 2.074 - 2.086: 8.0754% ( 1075) 00:14:27.570 2.086 - 2.098: 40.0766% ( 4260) 00:14:27.570 2.098 - 2.110: 46.5294% ( 859) 00:14:27.570 2.110 - 2.121: 53.2602% ( 896) 00:14:27.570 2.121 - 2.133: 60.5018% ( 964) 00:14:27.570 2.133 - 2.145: 62.3122% ( 241) 00:14:27.570 2.145 - 2.157: 68.3293% ( 801) 00:14:27.570 2.157 - 2.169: 76.1343% ( 1039) 00:14:27.570 2.169 - 2.181: 77.3963% ( 168) 00:14:27.570 2.181 - 2.193: 80.1382% ( 365) 00:14:27.570 2.193 - 2.204: 82.6022% ( 328) 00:14:27.570 2.204 - 2.216: 83.1581% ( 74) 00:14:27.570 2.216 - 2.228: 85.4342% ( 303) 00:14:27.570 2.228 - 2.240: 89.2278% ( 505) 00:14:27.570 2.240 - 2.252: 91.4213% ( 292) 00:14:27.570 2.252 - 2.264: 92.7133% ( 172) 00:14:27.570 2.264 - 2.276: 93.4871% ( 103) 00:14:27.570 2.276 - 2.287: 93.7200% ( 31) 00:14:27.570 2.287 - 2.299: 94.0279% ( 41) 00:14:27.570 2.299 - 2.311: 94.3660% ( 45) 00:14:27.570 2.311 - 2.323: 95.0947% ( 97) 00:14:27.570 2.323 - 2.335: 95.2674% ( 23) 
00:14:27.570 2.335 - 2.347: 95.3275% ( 8) 00:14:27.570 2.347 - 2.359: 95.3726% ( 6) 00:14:27.570 2.359 - 2.370: 95.4252% ( 7) 00:14:27.570 2.370 - 2.382: 95.5379% ( 15) 00:14:27.570 2.382 - 2.394: 95.7407% ( 27) 00:14:27.570 2.394 - 2.406: 95.9660% ( 30) 00:14:27.570 2.406 - 2.418: 96.1163% ( 20) 00:14:27.570 2.418 - 2.430: 96.1914% ( 10) 00:14:27.570 2.430 - 2.441: 96.3266% ( 18) 00:14:27.570 2.441 - 2.453: 96.4994% ( 23) 00:14:27.570 2.453 - 2.465: 96.6496% ( 20) 00:14:27.570 2.465 - 2.477: 96.8224% ( 23) 00:14:27.570 2.477 - 2.489: 97.0102% ( 25) 00:14:27.570 2.489 - 2.501: 97.1980% ( 25) 00:14:27.570 2.501 - 2.513: 97.3633% ( 22) 00:14:27.570 2.513 - 2.524: 97.5135% ( 20) 00:14:27.570 2.524 - 2.536: 97.6112% ( 13) 00:14:27.570 2.536 - 2.548: 97.7013% ( 12) 00:14:27.570 2.548 - 2.560: 97.7689% ( 9) 00:14:27.570 2.560 - 2.572: 97.8365% ( 9) 00:14:27.570 2.572 - 2.584: 97.9492% ( 15) 00:14:27.570 2.584 - 2.596: 97.9868% ( 5) 00:14:27.570 2.596 - 2.607: 98.0544% ( 9) 00:14:27.570 2.607 - 2.619: 98.0844% ( 4) 00:14:27.570 2.619 - 2.631: 98.1370% ( 7) 00:14:27.570 2.631 - 2.643: 98.1596% ( 3) 00:14:27.570 2.643 - 2.655: 98.1896% ( 4) 00:14:27.570 2.667 - 2.679: 98.1971% ( 1) 00:14:27.570 2.679 - 2.690: 98.2046% ( 1) 00:14:27.570 2.690 - 2.702: 98.2197% ( 2) 00:14:27.570 2.702 - 2.714: 98.2347% ( 2) 00:14:27.570 2.738 - 2.750: 98.2497% ( 2) 00:14:27.570 2.773 - 2.785: 98.2572% ( 1) 00:14:27.570 2.797 - 2.809: 98.2647% ( 1) 00:14:27.570 2.821 - 2.833: 98.2722% ( 1) 00:14:27.570 2.892 - 2.904: 98.2797% ( 1) 00:14:27.570 2.927 - 2.939: 98.2873% ( 1) 00:14:27.570 2.999 - 3.010: 98.3023% ( 2) 00:14:27.570 3.081 - 3.105: 98.3173% ( 2) 00:14:27.570 3.105 - 3.129: 98.3323% ( 2) 00:14:27.570 3.129 - 3.153: 98.3549% ( 3) 00:14:27.570 3.153 - 3.176: 98.3699% ( 2) 00:14:27.570 3.176 - 3.200: 98.3849% ( 2) 00:14:27.570 3.200 - 3.224: 98.4075% ( 3) 00:14:27.570 3.224 - 3.247: 98.4300% ( 3) 00:14:27.570 3.247 - 3.271: 98.4450% ( 2) 00:14:27.570 3.295 - 3.319: 98.4751% ( 4) 00:14:27.570 3.319 - 3.342: 98.4826% ( 1) 00:14:27.570 3.342 - 3.366: 98.4976% ( 2) 00:14:27.570 3.366 - 3.390: 98.5051% ( 1) 00:14:27.570 3.390 - 3.413: 98.5201% ( 2) 00:14:27.570 3.413 - 3.437: 98.5276% ( 1) 00:14:27.570 3.437 - 3.461: 98.5352% ( 1) 00:14:27.570 3.461 - 3.484: 98.5427% ( 1) 00:14:27.570 3.484 - 3.508: 98.5577% ( 2) 00:14:27.570 3.508 - 3.532: 98.5727% ( 2) 00:14:27.570 3.532 - 3.556: 98.5802% ( 1) 00:14:27.570 3.579 - 3.603: 98.5877% ( 1) 00:14:27.570 3.650 - 3.674: 98.5953% ( 1) 00:14:27.570 3.793 - 3.816: 98.6028% ( 1) 00:14:27.570 3.816 - 3.840: 98.6103% ( 1) 00:14:27.570 3.840 - 3.864: 98.6253% ( 2) 00:14:27.570 3.935 - 3.959: 98.6328% ( 1) 00:14:27.570 4.006 - 4.030: 98.6403% ( 1) 00:14:27.570 4.030 - 4.053: 98.6553% ( 2) 00:14:27.570 4.433 - 4.456: 98.6629% ( 1) 00:14:27.571 4.527 - 4.551: 98.6704% ( 1) 00:14:27.571 5.689 - 5.713: 98.6854% ( 2) 00:14:27.571 5.713 - 5.736: 98.6929% ( 1) 00:14:27.571 5.736 - 5.760: 98.7004% ( 1) 00:14:27.571 5.997 - 6.021: 98.7079% ( 1) 00:14:27.571 6.068 - 6.116: 98.7154% ( 1) 00:14:27.571 6.400 - 6.447: 98.7230% ( 1) 00:14:27.571 6.779 - 6.827: 98.7305% ( 1) 00:14:27.571 6.921 - 6.969: 98.7380% ( 1) 00:14:27.571 6.969 - 7.016: 98.7455% ( 1) 00:14:27.571 7.016 - 7.064: 98.7530% ( 1) 00:14:27.571 7.585 - 7.633: 98.7605% ( 1) 00:14:27.571 7.633 - 7.680: 98.7680% ( 1) 00:14:27.571 7.727 - 7.775: 98.7755% ( 1) 00:14:27.571 7.964 - 8.012: 98.7831% ( 1) 00:14:27.571 8.012 - 8.059: 98.7906% ( 1) 00:14:27.571 8.154 - 8.201: 98.8056% ( 2) 00:14:27.571 10.572 - 10.619: 98.8131% ( 1) 
00:14:27.571 10.714 - 10.761: 98.8206% ( 1) 00:14:27.571 15.360 - 15.455: 98.8281% ( 1) 00:14:27.571 15.550 - 15.644: 98.8356% ( 1) 00:14:27.571 15.644 - 15.739: 98.8507% ( 2) 00:14:27.571 15.739 - 15.834: 98.8807% ( 4) 00:14:27.571 15.834 - 15.929: 98.9108% ( 4) 00:14:27.571 15.929 - 16.024: 98.9333% ( 3) 00:14:27.571 16.024 - 16.119: 98.9483% ( 2) 00:14:27.571 16.119 - 16.213: 98.9633% ( 2) 00:14:27.571 16.213 - 16.308: 99.0084% ( 6) 00:14:27.571 16.308 - 16.403: 99.0685% ( 8) 00:14:27.571 16.403 - 16.498: 99.1061% ( 5) 00:14:27.571 16.498 - 16.593: 99.1136% ( 1) 00:14:27.571 16.593 - 16.687: 99.1286% ( 2) 00:14:27.571 16.687 - 16.782: 99.1737% ( 6) 00:14:27.571 16.782 - 16.877: 99.2188% ( 6) 00:14:27.571 16.877 - 16.972: 99.2413% ( 3) [2024-07-26 14:08:35.486290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:27.571 17.067 - 17.161: 99.2563% ( 2) 00:14:27.571 17.161 - 17.256: 99.2713% ( 2) 00:14:27.571 17.256 - 17.351: 99.2864% ( 2) 00:14:27.571 17.446 - 17.541: 99.2939% ( 1) 00:14:27.571 17.541 - 17.636: 99.3164% ( 3) 00:14:27.571 17.636 - 17.730: 99.3239% ( 1) 00:14:27.571 17.730 - 17.825: 99.3314% ( 1) 00:14:27.571 18.015 - 18.110: 99.3389% ( 1) 00:14:27.571 18.110 - 18.204: 99.3465% ( 1) 00:14:27.571 18.773 - 18.868: 99.3540% ( 1) 00:14:27.571 21.523 - 21.618: 99.3615% ( 1) 00:14:27.571 3276.800 - 3301.073: 99.3690% ( 1) 00:14:27.571 3325.345 - 3349.618: 99.3765% ( 1) 00:14:27.571 3980.705 - 4004.978: 99.8122% ( 58) 00:14:27.571 4004.978 - 4029.250: 100.0000% ( 25) 00:14:27.571 00:14:27.571 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:27.571 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:27.571 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:27.571 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:27.571 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:27.829 [ 00:14:27.829 { 00:14:27.829 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:27.829 "subtype": "Discovery", 00:14:27.829 "listen_addresses": [], 00:14:27.829 "allow_any_host": true, 00:14:27.829 "hosts": [] 00:14:27.829 }, 00:14:27.829 { 00:14:27.829 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:27.829 "subtype": "NVMe", 00:14:27.829 "listen_addresses": [ 00:14:27.829 { 00:14:27.829 "trtype": "VFIOUSER", 00:14:27.829 "adrfam": "IPv4", 00:14:27.829 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:27.829 "trsvcid": "0" 00:14:27.829 } 00:14:27.829 ], 00:14:27.829 "allow_any_host": true, 00:14:27.829 "hosts": [], 00:14:27.829 "serial_number": "SPDK1", 00:14:27.829 "model_number": "SPDK bdev Controller", 00:14:27.829 "max_namespaces": 32, 00:14:27.829 "min_cntlid": 1, 00:14:27.829 "max_cntlid": 65519, 00:14:27.829 "namespaces": [ 00:14:27.829 { 00:14:27.829 "nsid": 1, 00:14:27.829 "bdev_name": "Malloc1", 00:14:27.829 "name": "Malloc1", 00:14:27.829 "nguid": "808E6E318D7D43FA8F5BE8483AC3DC5D", 00:14:27.829 "uuid": "808e6e31-8d7d-43fa-8f5b-e8483ac3dc5d" 00:14:27.829 } 00:14:27.829 ] 00:14:27.829 }, 00:14:27.829 { 00:14:27.829 "nqn":
"nqn.2019-07.io.spdk:cnode2", 00:14:27.829 "subtype": "NVMe", 00:14:27.829 "listen_addresses": [ 00:14:27.829 { 00:14:27.829 "trtype": "VFIOUSER", 00:14:27.829 "adrfam": "IPv4", 00:14:27.829 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:27.829 "trsvcid": "0" 00:14:27.829 } 00:14:27.829 ], 00:14:27.829 "allow_any_host": true, 00:14:27.829 "hosts": [], 00:14:27.829 "serial_number": "SPDK2", 00:14:27.829 "model_number": "SPDK bdev Controller", 00:14:27.829 "max_namespaces": 32, 00:14:27.829 "min_cntlid": 1, 00:14:27.829 "max_cntlid": 65519, 00:14:27.829 "namespaces": [ 00:14:27.829 { 00:14:27.829 "nsid": 1, 00:14:27.829 "bdev_name": "Malloc2", 00:14:27.829 "name": "Malloc2", 00:14:27.829 "nguid": "A6587B9FADAC4344AD843AA692D4CF4C", 00:14:27.829 "uuid": "a6587b9f-adac-4344-ad84-3aa692d4cf4c" 00:14:27.829 } 00:14:27.829 ] 00:14:27.829 } 00:14:27.829 ] 00:14:27.829 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:27.829 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=204784 00:14:27.829 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:27.829 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:27.829 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:27.829 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:27.829 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:14:27.829 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:14:27.829 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:14:28.087 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.087 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:28.087 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:14:28.087 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:14:28.087 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:14:28.087 [2024-07-26 14:08:35.993395] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:28.087 14:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:28.087 14:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:28.087 14:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:28.087 14:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:28.087 14:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:28.345 Malloc3 00:14:28.345 14:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:28.603 [2024-07-26 14:08:36.573731] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:28.603 14:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:28.861 Asynchronous Event Request test 00:14:28.861 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:28.861 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:28.861 Registering asynchronous event callbacks... 00:14:28.861 Starting namespace attribute notice tests for all controllers... 00:14:28.861 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:28.861 aer_cb - Changed Namespace 00:14:28.861 Cleaning up... 00:14:28.861 [ 00:14:28.861 { 00:14:28.861 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:28.861 "subtype": "Discovery", 00:14:28.861 "listen_addresses": [], 00:14:28.861 "allow_any_host": true, 00:14:28.861 "hosts": [] 00:14:28.861 }, 00:14:28.861 { 00:14:28.861 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:28.861 "subtype": "NVMe", 00:14:28.861 "listen_addresses": [ 00:14:28.861 { 00:14:28.861 "trtype": "VFIOUSER", 00:14:28.861 "adrfam": "IPv4", 00:14:28.861 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:28.861 "trsvcid": "0" 00:14:28.861 } 00:14:28.861 ], 00:14:28.861 "allow_any_host": true, 00:14:28.861 "hosts": [], 00:14:28.861 "serial_number": "SPDK1", 00:14:28.861 "model_number": "SPDK bdev Controller", 00:14:28.861 "max_namespaces": 32, 00:14:28.861 "min_cntlid": 1, 00:14:28.861 "max_cntlid": 65519, 00:14:28.861 "namespaces": [ 00:14:28.861 { 00:14:28.861 "nsid": 1, 00:14:28.861 "bdev_name": "Malloc1", 00:14:28.861 "name": "Malloc1", 00:14:28.861 "nguid": "808E6E318D7D43FA8F5BE8483AC3DC5D", 00:14:28.861 "uuid": "808e6e31-8d7d-43fa-8f5b-e8483ac3dc5d" 00:14:28.861 }, 00:14:28.861 { 00:14:28.861 "nsid": 2, 00:14:28.861 "bdev_name": "Malloc3", 00:14:28.861 "name": "Malloc3", 00:14:28.861 "nguid": "9C27C05DC3144CC1A37FFC4D2EB0442C", 00:14:28.861 "uuid": "9c27c05d-c314-4cc1-a37f-fc4d2eb0442c" 00:14:28.861 } 00:14:28.861 ] 00:14:28.861 }, 00:14:28.861 { 00:14:28.861 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:28.861 "subtype": "NVMe", 00:14:28.861 "listen_addresses": [ 00:14:28.861 { 00:14:28.861 "trtype": "VFIOUSER", 00:14:28.861 "adrfam": "IPv4", 00:14:28.861 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:28.861 "trsvcid": "0" 00:14:28.861 } 00:14:28.861 ], 00:14:28.861 "allow_any_host": true, 00:14:28.861 "hosts": [], 00:14:28.861 "serial_number": "SPDK2", 00:14:28.861 "model_number": "SPDK bdev Controller", 00:14:28.861 "max_namespaces": 32, 00:14:28.861 "min_cntlid": 1, 00:14:28.861 "max_cntlid": 65519, 00:14:28.861 "namespaces": [ 00:14:28.862 
{ 00:14:28.862 "nsid": 1, 00:14:28.862 "bdev_name": "Malloc2", 00:14:28.862 "name": "Malloc2", 00:14:28.862 "nguid": "A6587B9FADAC4344AD843AA692D4CF4C", 00:14:28.862 "uuid": "a6587b9f-adac-4344-ad84-3aa692d4cf4c" 00:14:28.862 } 00:14:28.862 ] 00:14:28.862 } 00:14:28.862 ] 00:14:28.862 14:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 204784 00:14:28.862 14:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:28.862 14:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:28.862 14:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:28.862 14:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:28.862 [2024-07-26 14:08:36.853350] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:14:28.862 [2024-07-26 14:08:36.853396] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid204923 ] 00:14:28.862 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.121 [2024-07-26 14:08:36.888929] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:29.121 [2024-07-26 14:08:36.891262] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:29.121 [2024-07-26 14:08:36.891292] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5e4399b000 00:14:29.121 [2024-07-26 14:08:36.892260] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:29.121 [2024-07-26 14:08:36.893263] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:29.121 [2024-07-26 14:08:36.894269] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:29.121 [2024-07-26 14:08:36.895275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:29.121 [2024-07-26 14:08:36.896281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:29.121 [2024-07-26 14:08:36.897286] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:29.121 [2024-07-26 14:08:36.898289] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:29.121 [2024-07-26 14:08:36.899295] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:29.121 [2024-07-26 14:08:36.900310] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:29.121 [2024-07-26 14:08:36.900331] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5e43990000 00:14:29.121 [2024-07-26 14:08:36.901455] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:29.121 [2024-07-26 14:08:36.915172] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:29.121 [2024-07-26 14:08:36.915207] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:29.121 [2024-07-26 14:08:36.920321] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:29.121 [2024-07-26 14:08:36.920379] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:29.121 [2024-07-26 14:08:36.920471] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:29.122 [2024-07-26 14:08:36.920494] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:29.122 [2024-07-26 14:08:36.920505] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:29.122 [2024-07-26 14:08:36.921323] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:29.122 [2024-07-26 14:08:36.921349] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:29.122 [2024-07-26 14:08:36.921363] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:29.122 [2024-07-26 14:08:36.922327] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:29.122 [2024-07-26 14:08:36.922347] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:29.122 [2024-07-26 14:08:36.922361] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:29.122 [2024-07-26 14:08:36.923332] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:29.122 [2024-07-26 14:08:36.923353] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:29.122 [2024-07-26 14:08:36.924334] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:29.122 [2024-07-26 14:08:36.924358] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:29.122 [2024-07-26 14:08:36.924369] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:29.122 [2024-07-26 14:08:36.924380] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:29.122 [2024-07-26 14:08:36.924490] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:29.122 [2024-07-26 14:08:36.924498] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:29.122 [2024-07-26 14:08:36.924506] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:29.122 [2024-07-26 14:08:36.925344] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:29.122 [2024-07-26 14:08:36.926352] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:29.122 [2024-07-26 14:08:36.927359] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:29.122 [2024-07-26 14:08:36.928360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:29.122 [2024-07-26 14:08:36.928428] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:29.122 [2024-07-26 14:08:36.929376] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:29.122 [2024-07-26 14:08:36.929396] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:29.122 [2024-07-26 14:08:36.929405] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:29.122 [2024-07-26 14:08:36.929428] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:29.122 [2024-07-26 14:08:36.929441] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:29.122 [2024-07-26 14:08:36.929467] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:29.122 [2024-07-26 14:08:36.929476] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:29.122 [2024-07-26 14:08:36.929482] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:29.122 [2024-07-26 14:08:36.929502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:29.122 [2024-07-26 14:08:36.937542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:29.122 [2024-07-26 14:08:36.937567] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] 
transport max_xfer_size 131072 00:14:29.122 [2024-07-26 14:08:36.937576] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:29.122 [2024-07-26 14:08:36.937584] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:29.122 [2024-07-26 14:08:36.937592] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:29.122 [2024-07-26 14:08:36.937600] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:29.122 [2024-07-26 14:08:36.937613] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:29.122 [2024-07-26 14:08:36.937621] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:29.122 [2024-07-26 14:08:36.937635] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:29.122 [2024-07-26 14:08:36.937655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:29.122 [2024-07-26 14:08:36.945541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:29.122 [2024-07-26 14:08:36.945569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.122 [2024-07-26 14:08:36.945583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.122 [2024-07-26 14:08:36.945595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.122 [2024-07-26 14:08:36.945607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.122 [2024-07-26 14:08:36.945616] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:29.122 [2024-07-26 14:08:36.945631] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:29.122 [2024-07-26 14:08:36.945646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:29.122 [2024-07-26 14:08:36.953556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:29.122 [2024-07-26 14:08:36.953574] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:29.122 [2024-07-26 14:08:36.953583] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:29.122 [2024-07-26 14:08:36.953600] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set 
number of queues (timeout 30000 ms) 00:14:29.122 [2024-07-26 14:08:36.953611] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:29.122 [2024-07-26 14:08:36.953625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:29.122 [2024-07-26 14:08:36.961538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:29.122 [2024-07-26 14:08:36.961613] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:29.122 [2024-07-26 14:08:36.961631] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:29.122 [2024-07-26 14:08:36.961645] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:29.122 [2024-07-26 14:08:36.961653] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:29.122 [2024-07-26 14:08:36.961659] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:29.122 [2024-07-26 14:08:36.961669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:29.122 [2024-07-26 14:08:36.969537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:29.122 [2024-07-26 14:08:36.969562] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:29.122 [2024-07-26 14:08:36.969583] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:29.122 [2024-07-26 14:08:36.969599] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:29.122 [2024-07-26 14:08:36.969612] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:29.122 [2024-07-26 14:08:36.969620] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:29.122 [2024-07-26 14:08:36.969626] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:29.122 [2024-07-26 14:08:36.969636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:29.122 [2024-07-26 14:08:36.977543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:29.122 [2024-07-26 14:08:36.977571] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:29.122 [2024-07-26 14:08:36.977587] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:29.122 [2024-07-26 14:08:36.977600] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:14:29.122 [2024-07-26 14:08:36.977609] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:29.122 [2024-07-26 14:08:36.977615] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:29.122 [2024-07-26 14:08:36.977624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:29.122 [2024-07-26 14:08:36.985536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:29.122 [2024-07-26 14:08:36.985558] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:29.123 [2024-07-26 14:08:36.985571] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:29.123 [2024-07-26 14:08:36.985588] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:29.123 [2024-07-26 14:08:36.985601] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:29.123 [2024-07-26 14:08:36.985610] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:29.123 [2024-07-26 14:08:36.985619] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:29.123 [2024-07-26 14:08:36.985628] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:29.123 [2024-07-26 14:08:36.985636] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:29.123 [2024-07-26 14:08:36.985644] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:29.123 [2024-07-26 14:08:36.985673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:29.123 [2024-07-26 14:08:36.993538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:29.123 [2024-07-26 14:08:36.993564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:29.123 [2024-07-26 14:08:37.001537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:29.123 [2024-07-26 14:08:37.001563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:29.123 [2024-07-26 14:08:37.009540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:29.123 [2024-07-26 14:08:37.009564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:29.123 [2024-07-26 
14:08:37.017538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:29.123 [2024-07-26 14:08:37.017570] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:29.123 [2024-07-26 14:08:37.017581] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:29.123 [2024-07-26 14:08:37.017587] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:29.123 [2024-07-26 14:08:37.017593] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:29.123 [2024-07-26 14:08:37.017599] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:29.123 [2024-07-26 14:08:37.017608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:29.123 [2024-07-26 14:08:37.017620] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:29.123 [2024-07-26 14:08:37.017629] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:29.123 [2024-07-26 14:08:37.017634] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:29.123 [2024-07-26 14:08:37.017643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:29.123 [2024-07-26 14:08:37.017654] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:29.123 [2024-07-26 14:08:37.017663] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:29.123 [2024-07-26 14:08:37.017668] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:29.123 [2024-07-26 14:08:37.017677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:29.123 [2024-07-26 14:08:37.017690] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:29.123 [2024-07-26 14:08:37.017698] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:29.123 [2024-07-26 14:08:37.017704] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:29.123 [2024-07-26 14:08:37.017712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:29.123 [2024-07-26 14:08:37.025555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:29.123 [2024-07-26 14:08:37.025582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:29.123 [2024-07-26 14:08:37.025600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:29.123 [2024-07-26 14:08:37.025615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:29.123 
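
The trace above is the standard NVMe controller-enable handshake, driven here over the vfio-user transport: program the admin queue registers (AQA at offset 0x24, ASQ at 0x28, ACQ at 0x30), write CC.EN = 1 (offset 0x14, value 0x460001), poll CSTS (offset 0x1c) until RDY = 1, then walk Identify, Set Features, and Get Log Page. A minimal C sketch of that register-level sequence, assuming hypothetical mmio_* accessors over whatever register mapping the transport provides (this is not the SPDK API, only the handshake the log records):

    #include <stdbool.h>
    #include <stdint.h>

    /* NVMe controller register offsets, matching the trace above. */
    #define NVME_REG_CC   0x14   /* Controller Configuration */
    #define NVME_REG_CSTS 0x1c   /* Controller Status */
    #define NVME_REG_AQA  0x24   /* Admin Queue Attributes */
    #define NVME_REG_ASQ  0x28   /* Admin Submission Queue base */
    #define NVME_REG_ACQ  0x30   /* Admin Completion Queue base */

    #define NVME_CSTS_RDY (1u << 0)

    /* Hypothetical accessors over a mapped register window. */
    extern uint32_t mmio_read32(volatile void *regs, uint32_t off);
    extern void     mmio_write32(volatile void *regs, uint32_t off, uint32_t v);
    extern void     mmio_write64(volatile void *regs, uint32_t off, uint64_t v);

    bool nvme_enable(volatile void *regs, uint64_t asq_pa, uint64_t acq_pa)
    {
        /* 1. Admin queue bases and sizes (0xff00ff = 256-entry ASQ and ACQ,
         *    zero-based, as written at offset 0x24 in the trace). */
        mmio_write64(regs, NVME_REG_ASQ, asq_pa);
        mmio_write64(regs, NVME_REG_ACQ, acq_pa);
        mmio_write32(regs, NVME_REG_AQA, 0x00ff00ff);

        /* 2. CC = 0x460001: EN = 1 plus IOSQES = 6 / IOCQES = 4. */
        mmio_write32(regs, NVME_REG_CC, 0x460001);

        /* 3. Poll CSTS.RDY; a real driver sleeps between reads and gives
         *    up after the advertised timeout (15000 ms in the trace). */
        for (int i = 0; i < 15000; i++) {
            if (mmio_read32(regs, NVME_REG_CSTS) & NVME_CSTS_RDY)
                return true;
        }
        return false;
    }
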
===================================================== 00:14:29.123 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:29.123 ===================================================== 00:14:29.123 Controller Capabilities/Features 00:14:29.123 ================================ 00:14:29.123 Vendor ID: 4e58 00:14:29.123 Subsystem Vendor ID: 4e58 00:14:29.123 Serial Number: SPDK2 00:14:29.123 Model Number: SPDK bdev Controller 00:14:29.123 Firmware Version: 24.09 00:14:29.123 Recommended Arb Burst: 6 00:14:29.123 IEEE OUI Identifier: 8d 6b 50 00:14:29.123 Multi-path I/O 00:14:29.123 May have multiple subsystem ports: Yes 00:14:29.123 May have multiple controllers: Yes 00:14:29.123 Associated with SR-IOV VF: No 00:14:29.123 Max Data Transfer Size: 131072 00:14:29.123 Max Number of Namespaces: 32 00:14:29.123 Max Number of I/O Queues: 127 00:14:29.123 NVMe Specification Version (VS): 1.3 00:14:29.123 NVMe Specification Version (Identify): 1.3 00:14:29.123 Maximum Queue Entries: 256 00:14:29.123 Contiguous Queues Required: Yes 00:14:29.123 Arbitration Mechanisms Supported 00:14:29.123 Weighted Round Robin: Not Supported 00:14:29.123 Vendor Specific: Not Supported 00:14:29.123 Reset Timeout: 15000 ms 00:14:29.123 Doorbell Stride: 4 bytes 00:14:29.123 NVM Subsystem Reset: Not Supported 00:14:29.123 Command Sets Supported 00:14:29.123 NVM Command Set: Supported 00:14:29.123 Boot Partition: Not Supported 00:14:29.123 Memory Page Size Minimum: 4096 bytes 00:14:29.123 Memory Page Size Maximum: 4096 bytes 00:14:29.123 Persistent Memory Region: Not Supported 00:14:29.123 Optional Asynchronous Events Supported 00:14:29.123 Namespace Attribute Notices: Supported 00:14:29.123 Firmware Activation Notices: Not Supported 00:14:29.123 ANA Change Notices: Not Supported 00:14:29.123 PLE Aggregate Log Change Notices: Not Supported 00:14:29.123 LBA Status Info Alert Notices: Not Supported 00:14:29.123 EGE Aggregate Log Change Notices: Not Supported 00:14:29.123 Normal NVM Subsystem Shutdown event: Not Supported 00:14:29.123 Zone Descriptor Change Notices: Not Supported 00:14:29.123 Discovery Log Change Notices: Not Supported 00:14:29.123 Controller Attributes 00:14:29.123 128-bit Host Identifier: Supported 00:14:29.123 Non-Operational Permissive Mode: Not Supported 00:14:29.123 NVM Sets: Not Supported 00:14:29.123 Read Recovery Levels: Not Supported 00:14:29.123 Endurance Groups: Not Supported 00:14:29.123 Predictable Latency Mode: Not Supported 00:14:29.123 Traffic Based Keep ALive: Not Supported 00:14:29.123 Namespace Granularity: Not Supported 00:14:29.123 SQ Associations: Not Supported 00:14:29.123 UUID List: Not Supported 00:14:29.123 Multi-Domain Subsystem: Not Supported 00:14:29.123 Fixed Capacity Management: Not Supported 00:14:29.123 Variable Capacity Management: Not Supported 00:14:29.123 Delete Endurance Group: Not Supported 00:14:29.123 Delete NVM Set: Not Supported 00:14:29.123 Extended LBA Formats Supported: Not Supported 00:14:29.123 Flexible Data Placement Supported: Not Supported 00:14:29.123 00:14:29.123 Controller Memory Buffer Support 00:14:29.123 ================================ 00:14:29.123 Supported: No 00:14:29.123 00:14:29.123 Persistent Memory Region Support 00:14:29.123 ================================ 00:14:29.123 Supported: No 00:14:29.123 00:14:29.123 Admin Command Set Attributes 00:14:29.123 ============================ 00:14:29.123 Security Send/Receive: Not Supported 00:14:29.123 Format NVM: Not Supported 00:14:29.123 Firmware 
Activate/Download: Not Supported 00:14:29.123 Namespace Management: Not Supported 00:14:29.123 Device Self-Test: Not Supported 00:14:29.123 Directives: Not Supported 00:14:29.123 NVMe-MI: Not Supported 00:14:29.123 Virtualization Management: Not Supported 00:14:29.123 Doorbell Buffer Config: Not Supported 00:14:29.123 Get LBA Status Capability: Not Supported 00:14:29.123 Command & Feature Lockdown Capability: Not Supported 00:14:29.123 Abort Command Limit: 4 00:14:29.123 Async Event Request Limit: 4 00:14:29.123 Number of Firmware Slots: N/A 00:14:29.123 Firmware Slot 1 Read-Only: N/A 00:14:29.123 Firmware Activation Without Reset: N/A 00:14:29.123 Multiple Update Detection Support: N/A 00:14:29.123 Firmware Update Granularity: No Information Provided 00:14:29.123 Per-Namespace SMART Log: No 00:14:29.123 Asymmetric Namespace Access Log Page: Not Supported 00:14:29.123 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:29.123 Command Effects Log Page: Supported 00:14:29.123 Get Log Page Extended Data: Supported 00:14:29.123 Telemetry Log Pages: Not Supported 00:14:29.123 Persistent Event Log Pages: Not Supported 00:14:29.123 Supported Log Pages Log Page: May Support 00:14:29.123 Commands Supported & Effects Log Page: Not Supported 00:14:29.124 Feature Identifiers & Effects Log Page:May Support 00:14:29.124 NVMe-MI Commands & Effects Log Page: May Support 00:14:29.124 Data Area 4 for Telemetry Log: Not Supported 00:14:29.124 Error Log Page Entries Supported: 128 00:14:29.124 Keep Alive: Supported 00:14:29.124 Keep Alive Granularity: 10000 ms 00:14:29.124 00:14:29.124 NVM Command Set Attributes 00:14:29.124 ========================== 00:14:29.124 Submission Queue Entry Size 00:14:29.124 Max: 64 00:14:29.124 Min: 64 00:14:29.124 Completion Queue Entry Size 00:14:29.124 Max: 16 00:14:29.124 Min: 16 00:14:29.124 Number of Namespaces: 32 00:14:29.124 Compare Command: Supported 00:14:29.124 Write Uncorrectable Command: Not Supported 00:14:29.124 Dataset Management Command: Supported 00:14:29.124 Write Zeroes Command: Supported 00:14:29.124 Set Features Save Field: Not Supported 00:14:29.124 Reservations: Not Supported 00:14:29.124 Timestamp: Not Supported 00:14:29.124 Copy: Supported 00:14:29.124 Volatile Write Cache: Present 00:14:29.124 Atomic Write Unit (Normal): 1 00:14:29.124 Atomic Write Unit (PFail): 1 00:14:29.124 Atomic Compare & Write Unit: 1 00:14:29.124 Fused Compare & Write: Supported 00:14:29.124 Scatter-Gather List 00:14:29.124 SGL Command Set: Supported (Dword aligned) 00:14:29.124 SGL Keyed: Not Supported 00:14:29.124 SGL Bit Bucket Descriptor: Not Supported 00:14:29.124 SGL Metadata Pointer: Not Supported 00:14:29.124 Oversized SGL: Not Supported 00:14:29.124 SGL Metadata Address: Not Supported 00:14:29.124 SGL Offset: Not Supported 00:14:29.124 Transport SGL Data Block: Not Supported 00:14:29.124 Replay Protected Memory Block: Not Supported 00:14:29.124 00:14:29.124 Firmware Slot Information 00:14:29.124 ========================= 00:14:29.124 Active slot: 1 00:14:29.124 Slot 1 Firmware Revision: 24.09 00:14:29.124 00:14:29.124 00:14:29.124 Commands Supported and Effects 00:14:29.124 ============================== 00:14:29.124 Admin Commands 00:14:29.124 -------------- 00:14:29.124 Get Log Page (02h): Supported 00:14:29.124 Identify (06h): Supported 00:14:29.124 Abort (08h): Supported 00:14:29.124 Set Features (09h): Supported 00:14:29.124 Get Features (0Ah): Supported 00:14:29.124 Asynchronous Event Request (0Ch): Supported 00:14:29.124 Keep Alive (18h): Supported 00:14:29.124 I/O 
Commands 00:14:29.124 ------------ 00:14:29.124 Flush (00h): Supported LBA-Change 00:14:29.124 Write (01h): Supported LBA-Change 00:14:29.124 Read (02h): Supported 00:14:29.124 Compare (05h): Supported 00:14:29.124 Write Zeroes (08h): Supported LBA-Change 00:14:29.124 Dataset Management (09h): Supported LBA-Change 00:14:29.124 Copy (19h): Supported LBA-Change 00:14:29.124 00:14:29.124 Error Log 00:14:29.124 ========= 00:14:29.124 00:14:29.124 Arbitration 00:14:29.124 =========== 00:14:29.124 Arbitration Burst: 1 00:14:29.124 00:14:29.124 Power Management 00:14:29.124 ================ 00:14:29.124 Number of Power States: 1 00:14:29.124 Current Power State: Power State #0 00:14:29.124 Power State #0: 00:14:29.124 Max Power: 0.00 W 00:14:29.124 Non-Operational State: Operational 00:14:29.124 Entry Latency: Not Reported 00:14:29.124 Exit Latency: Not Reported 00:14:29.124 Relative Read Throughput: 0 00:14:29.124 Relative Read Latency: 0 00:14:29.124 Relative Write Throughput: 0 00:14:29.124 Relative Write Latency: 0 00:14:29.124 Idle Power: Not Reported 00:14:29.124 Active Power: Not Reported 00:14:29.124 Non-Operational Permissive Mode: Not Supported 00:14:29.124 00:14:29.124 Health Information 00:14:29.124 ================== 00:14:29.124 Critical Warnings: 00:14:29.124 Available Spare Space: OK 00:14:29.124 Temperature: OK 00:14:29.124 Device Reliability: OK 00:14:29.124 Read Only: No 00:14:29.124 Volatile Memory Backup: OK 00:14:29.124 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:29.124 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:29.124 Available Spare: 0% 00:14:29.124 [2024-07-26 14:08:37.025732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:29.124 [2024-07-26 14:08:37.033538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:29.124 [2024-07-26 14:08:37.033586] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:29.124 [2024-07-26 14:08:37.033604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.124 [2024-07-26 14:08:37.033615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.124 [2024-07-26 14:08:37.033625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.124 [2024-07-26 14:08:37.033635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.124 [2024-07-26 14:08:37.033713] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:29.124 [2024-07-26 14:08:37.033735] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:29.124 [2024-07-26 14:08:37.034715] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:29.124 [2024-07-26 14:08:37.034788] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:29.124 [2024-07-26 14:08:37.034803] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*:
[/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:29.124 [2024-07-26 14:08:37.035730] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:29.124 [2024-07-26 14:08:37.035755] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:29.124 [2024-07-26 14:08:37.035809] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:29.124 [2024-07-26 14:08:37.037013] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:29.124 Available Spare Threshold: 0% 00:14:29.124 Life Percentage Used: 0% 00:14:29.124 Data Units Read: 0 00:14:29.124 Data Units Written: 0 00:14:29.124 Host Read Commands: 0 00:14:29.124 Host Write Commands: 0 00:14:29.124 Controller Busy Time: 0 minutes 00:14:29.124 Power Cycles: 0 00:14:29.124 Power On Hours: 0 hours 00:14:29.124 Unsafe Shutdowns: 0 00:14:29.124 Unrecoverable Media Errors: 0 00:14:29.124 Lifetime Error Log Entries: 0 00:14:29.124 Warning Temperature Time: 0 minutes 00:14:29.124 Critical Temperature Time: 0 minutes 00:14:29.124 00:14:29.124 Number of Queues 00:14:29.124 ================ 00:14:29.124 Number of I/O Submission Queues: 127 00:14:29.124 Number of I/O Completion Queues: 127 00:14:29.124 00:14:29.124 Active Namespaces 00:14:29.124 ================= 00:14:29.124 Namespace ID:1 00:14:29.124 Error Recovery Timeout: Unlimited 00:14:29.124 Command Set Identifier: NVM (00h) 00:14:29.124 Deallocate: Supported 00:14:29.124 Deallocated/Unwritten Error: Not Supported 00:14:29.124 Deallocated Read Value: Unknown 00:14:29.124 Deallocate in Write Zeroes: Not Supported 00:14:29.124 Deallocated Guard Field: 0xFFFF 00:14:29.124 Flush: Supported 00:14:29.124 Reservation: Supported 00:14:29.124 Namespace Sharing Capabilities: Multiple Controllers 00:14:29.124 Size (in LBAs): 131072 (0GiB) 00:14:29.124 Capacity (in LBAs): 131072 (0GiB) 00:14:29.124 Utilization (in LBAs): 131072 (0GiB) 00:14:29.124 NGUID: A6587B9FADAC4344AD843AA692D4CF4C 00:14:29.124 UUID: a6587b9f-adac-4344-ad84-3aa692d4cf4c 00:14:29.124 Thin Provisioning: Not Supported 00:14:29.124 Per-NS Atomic Units: Yes 00:14:29.124 Atomic Boundary Size (Normal): 0 00:14:29.124 Atomic Boundary Size (PFail): 0 00:14:29.124 Atomic Boundary Offset: 0 00:14:29.124 Maximum Single Source Range Length: 65535 00:14:29.124 Maximum Copy Length: 65535 00:14:29.124 Maximum Source Range Count: 1 00:14:29.124 NGUID/EUI64 Never Reused: No 00:14:29.124 Namespace Write Protected: No 00:14:29.124 Number of LBA Formats: 1 00:14:29.124 Current LBA Format: LBA Format #00 00:14:29.124 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:29.124 00:14:29.124 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:29.124 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.382 [2024-07-26 14:08:37.267472] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:34.647 Initializing NVMe Controllers 00:14:34.647 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2::
nqn.2019-07.io.spdk:cnode2 00:14:34.647 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:34.647 Initialization complete. Launching workers. 00:14:34.647 ======================================================== 00:14:34.647 Latency(us) 00:14:34.647 Device Information : IOPS MiB/s Average min max 00:14:34.647 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35330.39 138.01 3622.02 1133.27 8989.30 00:14:34.647 ======================================================== 00:14:34.647 Total : 35330.39 138.01 3622.02 1133.27 8989.30 00:14:34.647 00:14:34.647 [2024-07-26 14:08:42.371910] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:34.647 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:34.647 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.647 [2024-07-26 14:08:42.614569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:39.951 Initializing NVMe Controllers 00:14:39.951 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:39.951 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:39.951 Initialization complete. Launching workers. 00:14:39.951 ======================================================== 00:14:39.951 Latency(us) 00:14:39.951 Device Information : IOPS MiB/s Average min max 00:14:39.951 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32788.99 128.08 3903.23 1195.44 7813.72 00:14:39.951 ======================================================== 00:14:39.951 Total : 32788.99 128.08 3903.23 1195.44 7813.72 00:14:39.951 00:14:39.951 [2024-07-26 14:08:47.636728] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:39.951 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:39.951 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.951 [2024-07-26 14:08:47.839656] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:45.216 [2024-07-26 14:08:52.978670] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:45.216 Initializing NVMe Controllers 00:14:45.216 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:45.216 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:45.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:45.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:45.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:45.216 Initialization complete. Launching workers. 
00:14:45.216 Starting thread on core 2 00:14:45.216 Starting thread on core 3 00:14:45.216 Starting thread on core 1 00:14:45.216 14:08:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:45.216 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.474 [2024-07-26 14:08:53.286033] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:48.758 [2024-07-26 14:08:56.359781] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:48.758 Initializing NVMe Controllers 00:14:48.758 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:48.758 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:48.758 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:48.758 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:48.758 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:48.758 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:48.758 Initialization complete. Launching workers. 00:14:48.758 Starting thread on core 1 with urgent priority queue 00:14:48.758 Starting thread on core 2 with urgent priority queue 00:14:48.758 Starting thread on core 3 with urgent priority queue 00:14:48.758 Starting thread on core 0 with urgent priority queue 00:14:48.758 SPDK bdev Controller (SPDK2 ) core 0: 4437.67 IO/s 22.53 secs/100000 ios 00:14:48.758 SPDK bdev Controller (SPDK2 ) core 1: 4306.00 IO/s 23.22 secs/100000 ios 00:14:48.758 SPDK bdev Controller (SPDK2 ) core 2: 4588.67 IO/s 21.79 secs/100000 ios 00:14:48.758 SPDK bdev Controller (SPDK2 ) core 3: 4798.67 IO/s 20.84 secs/100000 ios 00:14:48.758 ======================================================== 00:14:48.758 00:14:48.758 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:48.758 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.758 [2024-07-26 14:08:56.652987] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:48.758 Initializing NVMe Controllers 00:14:48.758 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:48.758 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:48.758 Namespace ID: 1 size: 0GB 00:14:48.758 Initialization complete. 00:14:48.758 INFO: using host memory buffer for IO 00:14:48.758 Hello world! 
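
The throughput figures in the two spdk_nvme_perf tables above and the arbitration summary are internally consistent: MiB/s is IOPS times the 4096-byte transfer size (-o 4096), and the arbitration secs/100000 ios column is simply 100000 divided by the IO/s column. Worked out from the logged values:

    35330.39 IOPS * 4096 B / 2^20 B/MiB = 138.01 MiB/s   (read run)
    32788.99 IOPS * 4096 B / 2^20 B/MiB = 128.08 MiB/s   (write run)
    100000 ios / 4437.67 IO/s           = 22.53 secs     (arbitration, core 0)
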
00:14:48.758 [2024-07-26 14:08:56.666062] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:48.758 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:48.758 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.016 [2024-07-26 14:08:56.949933] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:50.390 Initializing NVMe Controllers 00:14:50.390 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:50.390 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:50.390 Initialization complete. Launching workers. 00:14:50.390 submit (in ns) avg, min, max = 5663.1, 3510.0, 4017338.9 00:14:50.390 complete (in ns) avg, min, max = 26101.2, 2065.6, 4015717.8 00:14:50.390 00:14:50.390 Submit histogram 00:14:50.390 ================ 00:14:50.390 Range in us Cumulative Count 00:14:50.390 3.508 - 3.532: 0.5014% ( 68) 00:14:50.390 3.532 - 3.556: 2.0351% ( 208) 00:14:50.390 3.556 - 3.579: 6.0389% ( 543) 00:14:50.390 3.579 - 3.603: 11.8714% ( 791) 00:14:50.390 3.603 - 3.627: 19.7832% ( 1073) 00:14:50.390 3.627 - 3.650: 28.2112% ( 1143) 00:14:50.390 3.650 - 3.674: 35.7174% ( 1018) 00:14:50.390 3.674 - 3.698: 43.0910% ( 1000) 00:14:50.390 3.698 - 3.721: 50.7447% ( 1038) 00:14:50.390 3.721 - 3.745: 57.7422% ( 949) 00:14:50.390 3.745 - 3.769: 62.4023% ( 632) 00:14:50.390 3.769 - 3.793: 66.8485% ( 603) 00:14:50.390 3.793 - 3.816: 69.7758% ( 397) 00:14:50.390 3.816 - 3.840: 73.4036% ( 492) 00:14:50.390 3.840 - 3.864: 77.2969% ( 528) 00:14:50.390 3.864 - 3.887: 80.6297% ( 452) 00:14:50.390 3.887 - 3.911: 83.3432% ( 368) 00:14:50.390 3.911 - 3.935: 86.0050% ( 361) 00:14:50.390 3.935 - 3.959: 88.3351% ( 316) 00:14:50.390 3.959 - 3.982: 90.1489% ( 246) 00:14:50.390 3.982 - 4.006: 91.5720% ( 193) 00:14:50.390 4.006 - 4.030: 92.9435% ( 186) 00:14:50.390 4.030 - 4.053: 93.9979% ( 143) 00:14:50.390 4.053 - 4.077: 94.9639% ( 131) 00:14:50.390 4.077 - 4.101: 95.6496% ( 93) 00:14:50.390 4.101 - 4.124: 96.1068% ( 62) 00:14:50.390 4.124 - 4.148: 96.4238% ( 43) 00:14:50.390 4.148 - 4.172: 96.6229% ( 27) 00:14:50.390 4.172 - 4.196: 96.8073% ( 25) 00:14:50.390 4.196 - 4.219: 96.9252% ( 16) 00:14:50.390 4.219 - 4.243: 97.0063% ( 11) 00:14:50.390 4.243 - 4.267: 97.0727% ( 9) 00:14:50.390 4.267 - 4.290: 97.1686% ( 13) 00:14:50.390 4.290 - 4.314: 97.2275% ( 8) 00:14:50.390 4.314 - 4.338: 97.2865% ( 8) 00:14:50.390 4.338 - 4.361: 97.3529% ( 9) 00:14:50.390 4.361 - 4.385: 97.4193% ( 9) 00:14:50.390 4.385 - 4.409: 97.4856% ( 9) 00:14:50.390 4.433 - 4.456: 97.5004% ( 2) 00:14:50.390 4.456 - 4.480: 97.5151% ( 2) 00:14:50.390 4.480 - 4.504: 97.5225% ( 1) 00:14:50.390 4.504 - 4.527: 97.5299% ( 1) 00:14:50.390 4.527 - 4.551: 97.5372% ( 1) 00:14:50.390 4.741 - 4.764: 97.5520% ( 2) 00:14:50.390 4.764 - 4.788: 97.5667% ( 2) 00:14:50.390 4.788 - 4.812: 97.5741% ( 1) 00:14:50.390 4.836 - 4.859: 97.5962% ( 3) 00:14:50.390 4.859 - 4.883: 97.6626% ( 9) 00:14:50.390 4.883 - 4.907: 97.7363% ( 10) 00:14:50.390 4.907 - 4.930: 97.8396% ( 14) 00:14:50.390 4.930 - 4.954: 97.9280% ( 12) 00:14:50.390 4.954 - 4.978: 97.9723% ( 6) 00:14:50.390 4.978 - 5.001: 98.0608% ( 12) 00:14:50.390 5.001 - 5.025: 98.1124% ( 7) 00:14:50.390 5.025 - 5.049: 98.1566% ( 6) 00:14:50.390 5.049 - 5.073: 98.2156% ( 
8) 00:14:50.390 5.073 - 5.096: 98.2598% ( 6) 00:14:50.390 5.096 - 5.120: 98.2967% ( 5) 00:14:50.390 5.120 - 5.144: 98.3188% ( 3) 00:14:50.390 5.144 - 5.167: 98.3483% ( 4) 00:14:50.390 5.167 - 5.191: 98.3852% ( 5) 00:14:50.390 5.191 - 5.215: 98.4073% ( 3) 00:14:50.390 5.215 - 5.239: 98.4516% ( 6) 00:14:50.390 5.239 - 5.262: 98.4663% ( 2) 00:14:50.390 5.262 - 5.286: 98.5032% ( 5) 00:14:50.390 5.286 - 5.310: 98.5327% ( 4) 00:14:50.390 5.310 - 5.333: 98.5474% ( 2) 00:14:50.390 5.333 - 5.357: 98.5769% ( 4) 00:14:50.390 5.357 - 5.381: 98.5917% ( 2) 00:14:50.390 5.381 - 5.404: 98.6285% ( 5) 00:14:50.390 5.404 - 5.428: 98.6359% ( 1) 00:14:50.390 5.428 - 5.452: 98.6433% ( 1) 00:14:50.390 5.499 - 5.523: 98.6580% ( 2) 00:14:50.390 5.523 - 5.547: 98.6728% ( 2) 00:14:50.390 5.547 - 5.570: 98.6949% ( 3) 00:14:50.390 5.570 - 5.594: 98.7023% ( 1) 00:14:50.390 5.618 - 5.641: 98.7096% ( 1) 00:14:50.390 5.641 - 5.665: 98.7244% ( 2) 00:14:50.390 5.713 - 5.736: 98.7391% ( 2) 00:14:50.390 5.736 - 5.760: 98.7465% ( 1) 00:14:50.390 5.760 - 5.784: 98.7539% ( 1) 00:14:50.390 5.831 - 5.855: 98.7612% ( 1) 00:14:50.390 5.879 - 5.902: 98.7760% ( 2) 00:14:50.390 5.950 - 5.973: 98.7834% ( 1) 00:14:50.390 6.163 - 6.210: 98.7907% ( 1) 00:14:50.390 6.447 - 6.495: 98.7981% ( 1) 00:14:50.390 6.495 - 6.542: 98.8055% ( 1) 00:14:50.390 6.732 - 6.779: 98.8129% ( 1) 00:14:50.390 6.779 - 6.827: 98.8202% ( 1) 00:14:50.390 6.874 - 6.921: 98.8276% ( 1) 00:14:50.390 7.016 - 7.064: 98.8350% ( 1) 00:14:50.390 7.064 - 7.111: 98.8424% ( 1) 00:14:50.390 7.206 - 7.253: 98.8497% ( 1) 00:14:50.390 7.301 - 7.348: 98.8571% ( 1) 00:14:50.390 7.396 - 7.443: 98.8645% ( 1) 00:14:50.390 7.490 - 7.538: 98.8792% ( 2) 00:14:50.390 7.585 - 7.633: 98.8866% ( 1) 00:14:50.390 7.727 - 7.775: 98.8940% ( 1) 00:14:50.390 7.822 - 7.870: 98.9013% ( 1) 00:14:50.390 7.917 - 7.964: 98.9087% ( 1) 00:14:50.390 7.964 - 8.012: 98.9161% ( 1) 00:14:50.390 8.107 - 8.154: 98.9235% ( 1) 00:14:50.390 8.344 - 8.391: 98.9308% ( 1) 00:14:50.390 8.533 - 8.581: 98.9456% ( 2) 00:14:50.390 8.676 - 8.723: 98.9603% ( 2) 00:14:50.390 8.723 - 8.770: 98.9677% ( 1) 00:14:50.390 8.818 - 8.865: 98.9751% ( 1) 00:14:50.390 8.865 - 8.913: 98.9898% ( 2) 00:14:50.390 8.913 - 8.960: 98.9972% ( 1) 00:14:50.390 8.960 - 9.007: 99.0046% ( 1) 00:14:50.390 9.007 - 9.055: 99.0119% ( 1) 00:14:50.390 9.055 - 9.102: 99.0267% ( 2) 00:14:50.390 9.197 - 9.244: 99.0414% ( 2) 00:14:50.390 9.244 - 9.292: 99.0488% ( 1) 00:14:50.390 9.292 - 9.339: 99.0562% ( 1) 00:14:50.390 9.339 - 9.387: 99.0636% ( 1) 00:14:50.390 9.387 - 9.434: 99.0709% ( 1) 00:14:50.390 9.481 - 9.529: 99.0783% ( 1) 00:14:50.390 9.576 - 9.624: 99.0857% ( 1) 00:14:50.390 9.719 - 9.766: 99.0931% ( 1) 00:14:50.390 9.813 - 9.861: 99.1004% ( 1) 00:14:50.390 10.287 - 10.335: 99.1225% ( 3) 00:14:50.390 10.430 - 10.477: 99.1299% ( 1) 00:14:50.390 10.619 - 10.667: 99.1373% ( 1) 00:14:50.390 10.856 - 10.904: 99.1447% ( 1) 00:14:50.390 11.141 - 11.188: 99.1594% ( 2) 00:14:50.390 11.236 - 11.283: 99.1668% ( 1) 00:14:50.390 11.567 - 11.615: 99.1742% ( 1) 00:14:50.390 11.710 - 11.757: 99.1815% ( 1) 00:14:50.390 11.852 - 11.899: 99.1963% ( 2) 00:14:50.390 12.326 - 12.421: 99.2037% ( 1) 00:14:50.390 12.610 - 12.705: 99.2110% ( 1) 00:14:50.390 12.990 - 13.084: 99.2184% ( 1) 00:14:50.390 13.084 - 13.179: 99.2258% ( 1) 00:14:50.390 13.369 - 13.464: 99.2332% ( 1) 00:14:50.390 13.464 - 13.559: 99.2405% ( 1) 00:14:50.390 13.559 - 13.653: 99.2479% ( 1) 00:14:50.390 13.653 - 13.748: 99.2553% ( 1) 00:14:50.390 14.033 - 14.127: 99.2626% ( 1) 00:14:50.390 14.507 - 14.601: 
99.2848% ( 3) 00:14:50.390 14.601 - 14.696: 99.2921% ( 1) 00:14:50.390 14.696 - 14.791: 99.2995% ( 1) 00:14:50.390 15.076 - 15.170: 99.3069% ( 1) 00:14:50.390 17.256 - 17.351: 99.3216% ( 2) 00:14:50.390 17.351 - 17.446: 99.3438% ( 3) 00:14:50.390 17.446 - 17.541: 99.3585% ( 2) 00:14:50.390 17.541 - 17.636: 99.3732% ( 2) 00:14:50.390 17.636 - 17.730: 99.4322% ( 8) 00:14:50.390 17.730 - 17.825: 99.4839% ( 7) 00:14:50.390 17.825 - 17.920: 99.5355% ( 7) 00:14:50.390 17.920 - 18.015: 99.5871% ( 7) 00:14:50.390 18.015 - 18.110: 99.6387% ( 7) 00:14:50.390 18.110 - 18.204: 99.6682% ( 4) 00:14:50.390 18.204 - 18.299: 99.7051% ( 5) 00:14:50.390 18.299 - 18.394: 99.7419% ( 5) 00:14:50.390 18.394 - 18.489: 99.7714% ( 4) 00:14:50.390 18.489 - 18.584: 99.8009% ( 4) 00:14:50.391 18.584 - 18.679: 99.8230% ( 3) 00:14:50.391 18.679 - 18.773: 99.8673% ( 6) 00:14:50.391 18.773 - 18.868: 99.8820% ( 2) 00:14:50.391 18.868 - 18.963: 99.8894% ( 1) 00:14:50.391 18.963 - 19.058: 99.8968% ( 1) 00:14:50.391 19.058 - 19.153: 99.9263% ( 4) 00:14:50.391 19.153 - 19.247: 99.9336% ( 1) 00:14:50.391 19.247 - 19.342: 99.9410% ( 1) 00:14:50.391 27.686 - 27.876: 99.9484% ( 1) 00:14:50.391 28.255 - 28.444: 99.9558% ( 1) 00:14:50.391 3980.705 - 4004.978: 99.9926% ( 5) 00:14:50.391 4004.978 - 4029.250: 100.0000% ( 1) 00:14:50.391 00:14:50.391 Complete histogram 00:14:50.391 ================== 00:14:50.391 Range in us Cumulative Count 00:14:50.391 2.062 - 2.074: 1.7402% ( 236) 00:14:50.391 2.074 - 2.086: 34.1248% ( 4392) 00:14:50.391 2.086 - 2.098: 47.1464% ( 1766) 00:14:50.391 2.098 - 2.110: 50.2876% ( 426) 00:14:50.391 2.110 - 2.121: 59.6815% ( 1274) 00:14:50.391 2.121 - 2.133: 62.4170% ( 371) 00:14:50.391 2.133 - 2.145: 67.5638% ( 698) 00:14:50.391 2.145 - 2.157: 80.3716% ( 1737) 00:14:50.391 2.157 - 2.169: 82.8860% ( 341) 00:14:50.391 2.169 - 2.181: 85.1128% ( 302) 00:14:50.391 2.181 - 2.193: 89.0798% ( 538) 00:14:50.391 2.193 - 2.204: 90.2227% ( 155) 00:14:50.391 2.204 - 2.216: 91.0706% ( 115) 00:14:50.391 2.216 - 2.228: 92.5232% ( 197) 00:14:50.391 2.228 - 2.240: 94.3224% ( 244) 00:14:50.391 2.240 - 2.252: 95.0671% ( 101) 00:14:50.391 2.252 - 2.264: 95.2736% ( 28) 00:14:50.391 2.264 - 2.276: 95.3915% ( 16) 00:14:50.391 2.276 - 2.287: 95.4726% ( 11) 00:14:50.391 2.287 - 2.299: 95.6127% ( 19) 00:14:50.391 2.299 - 2.311: 95.7602% ( 20) 00:14:50.391 2.311 - 2.323: 95.8708% ( 15) 00:14:50.391 2.323 - 2.335: 95.8929% ( 3) 00:14:50.391 2.335 - 2.347: 95.9151% ( 3) 00:14:50.391 2.347 - 2.359: 95.9519% ( 5) 00:14:50.391 2.359 - 2.370: 96.0773% ( 17) 00:14:50.391 2.370 - 2.382: 96.2837% ( 28) 00:14:50.391 2.382 - 2.394: 96.5713% ( 39) 00:14:50.391 2.394 - 2.406: 96.9326% ( 49) 00:14:50.391 2.406 - 2.418: 97.1686% ( 32) 00:14:50.391 2.418 - 2.430: 97.3676% ( 27) 00:14:50.391 2.430 - 2.441: 97.5667% ( 27) 00:14:50.391 2.441 - 2.453: 97.7216% ( 21) 00:14:50.391 2.453 - 2.465: 97.8912% ( 23) 00:14:50.391 2.465 - 2.477: 97.9723% ( 11) 00:14:50.391 2.477 - 2.489: 98.0239% ( 7) 00:14:50.391 2.489 - 2.501: 98.0829% ( 8) 00:14:50.391 2.501 - 2.513: 98.1124% ( 4) 00:14:50.391 2.513 - 2.524: 98.1197% ( 1) 00:14:50.391 2.524 - 2.536: 98.1271% ( 1) 00:14:50.391 2.536 - 2.548: 98.1419% ( 2) 00:14:50.391 2.548 - 2.560: 98.1492% ( 1) 00:14:50.391 2.560 - 2.572: 98.1566% ( 1) 00:14:50.391 2.596 - 2.607: 98.1640% ( 1) 00:14:50.391 2.619 - 2.631: 98.1714% ( 1) 00:14:50.391 2.631 - 2.643: 98.1787% ( 1) 00:14:50.391 2.643 - 2.655: 98.2009% ( 3) 00:14:50.391 2.679 - 2.690: 98.2082% ( 1) 00:14:50.391 2.726 - 2.738: 98.2230% ( 2) 00:14:50.391 2.761 - 2.773: 
98.2303% ( 1) 00:14:50.391 2.797 - 2.809: 98.2377% ( 1) 00:14:50.391 2.809 - 2.821: 98.2451% ( 1) 00:14:50.391 2.821 - 2.833: 98.2525% ( 1) 00:14:50.391 2.844 - 2.856: 98.2598% ( 1) 00:14:50.391 2.856 - 2.868: 98.2672% ( 1) 00:14:50.391 2.892 - 2.904: 98.2746% ( 1) 00:14:50.391 2.927 - 2.939: 98.2820% ( 1) 00:14:50.391 2.939 - 2.951: 98.3115% ( 4) 00:14:50.391 2.951 - 2.963: 98.3188% ( 1) 00:14:50.391 2.975 - 2.987: 98.3483% ( 4) 00:14:50.391 2.987 - 2.999: 98.3557% ( 1) 00:14:50.391 3.034 - 3.058: 98.3926% ( 5) 00:14:50.391 3.058 - 3.081: 98.4294% ( 5) 00:14:50.391 3.081 - 3.105: 98.4368% ( 1) 00:14:50.391 3.129 - 3.153: 98.4442% ( 1) 00:14:50.391 3.153 - 3.176: 98.4810% ( 5) 00:14:50.391 3.176 - 3.200: 98.4884% ( 1) 00:14:50.391 3.224 - 3.247: 98.5032% ( 2) 00:14:50.391 3.271 - 3.295: 98.5253% ( 3) 00:14:50.391 3.319 - 3.342: 98.5474% ( 3) 00:14:50.391 3.342 - 3.366: 98.5548% ( 1) 00:14:50.391 3.366 - 3.390: 98.5622% ( 1) 00:14:50.391 3.390 - 3.413: 98.5769% ( 2) 00:14:50.391 3.413 - 3.437: 98.5990% ( 3) 00:14:50.391 3.437 - 3.461: 98.6285% ( 4) 00:14:50.391 3.461 - 3.484: 98.6359% ( 1) 00:14:50.391 3.508 - 3.532: 98.6506% ( 2) 00:14:50.391 3.556 - 3.579: 98.6580% ( 1) 00:14:50.391 3.603 - 3.627: 98.6654% ( 1) 00:14:50.391 3.650 - 3.674: 98.6875% ( 3) 00:14:50.391 3.674 - 3.698: 98.6949% ( 1) 00:14:50.391 3.698 - 3.721: 98.7096% ( 2) 00:14:50.391 [2024-07-26 14:08:58.042213] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:50.391 3.769 - 3.793: 98.7391% ( 4) 00:14:50.391 3.793 - 3.816: 98.7465% ( 1) 00:14:50.391 3.864 - 3.887: 98.7612% ( 2) 00:14:50.391 3.911 - 3.935: 98.7686% ( 1) 00:14:50.391 3.959 - 3.982: 98.7760% ( 1) 00:14:50.391 3.982 - 4.006: 98.7834% ( 1) 00:14:50.391 4.551 - 4.575: 98.7907% ( 1) 00:14:50.391 5.665 - 5.689: 98.7981% ( 1) 00:14:50.391 5.926 - 5.950: 98.8055% ( 1) 00:14:50.391 6.305 - 6.353: 98.8129% ( 1) 00:14:50.391 6.400 - 6.447: 98.8202% ( 1) 00:14:50.391 6.495 - 6.542: 98.8276% ( 1) 00:14:50.391 6.542 - 6.590: 98.8350% ( 1) 00:14:50.391 6.590 - 6.637: 98.8424% ( 1) 00:14:50.391 7.016 - 7.064: 98.8497% ( 1) 00:14:50.391 7.064 - 7.111: 98.8571% ( 1) 00:14:50.391 7.111 - 7.159: 98.8718% ( 2) 00:14:50.391 7.490 - 7.538: 98.8866% ( 2) 00:14:50.391 7.538 - 7.585: 98.8940% ( 1) 00:14:50.391 7.633 - 7.680: 98.9013% ( 1) 00:14:50.391 7.870 - 7.917: 98.9161% ( 2) 00:14:50.391 8.201 - 8.249: 98.9235% ( 1) 00:14:50.391 8.249 - 8.296: 98.9308% ( 1) 00:14:50.391 8.344 - 8.391: 98.9382% ( 1) 00:14:50.391 8.770 - 8.818: 98.9456% ( 1) 00:14:50.391 9.197 - 9.244: 98.9530% ( 1) 00:14:50.391 9.434 - 9.481: 98.9603% ( 1) 00:14:50.391 10.145 - 10.193: 98.9677% ( 1) 00:14:50.391 15.455 - 15.550: 98.9751% ( 1) 00:14:50.391 15.550 - 15.644: 98.9825% ( 1) 00:14:50.391 15.644 - 15.739: 99.0046% ( 3) 00:14:50.391 15.739 - 15.834: 99.0341% ( 4) 00:14:50.391 15.834 - 15.929: 99.0488% ( 2) 00:14:50.391 15.929 - 16.024: 99.0709% ( 3) 00:14:50.391 16.024 - 16.119: 99.1225% ( 7) 00:14:50.391 16.119 - 16.213: 99.1520% ( 4) 00:14:50.391 16.213 - 16.308: 99.1594% ( 1) 00:14:50.391 16.308 - 16.403: 99.1742% ( 2) 00:14:50.391 16.403 - 16.498: 99.2037% ( 4) 00:14:50.391 16.498 - 16.593: 99.2110% ( 1) 00:14:50.391 16.593 - 16.687: 99.2553% ( 6) 00:14:50.391 16.687 - 16.782: 99.2700% ( 2) 00:14:50.391 16.782 - 16.877: 99.3216% ( 7) 00:14:50.391 16.877 - 16.972: 99.3585% ( 5) 00:14:50.391 17.256 - 17.351: 99.3732% ( 2) 00:14:50.391 17.541 - 17.636: 99.3806% ( 1) 00:14:50.391 17.920 - 18.015: 99.3880% ( 1) 00:14:50.391 18.204 - 18.299:
99.3954% ( 1) 00:14:50.391 21.428 - 21.523: 99.4027% ( 1) 00:14:50.391 3980.705 - 4004.978: 99.8746% ( 64) 00:14:50.391 4004.978 - 4029.250: 100.0000% ( 17) 00:14:50.391 00:14:50.391 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:50.391 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:50.391 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:50.391 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:50.391 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:50.391 [ 00:14:50.391 { 00:14:50.391 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:50.391 "subtype": "Discovery", 00:14:50.391 "listen_addresses": [], 00:14:50.391 "allow_any_host": true, 00:14:50.391 "hosts": [] 00:14:50.391 }, 00:14:50.391 { 00:14:50.391 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:50.391 "subtype": "NVMe", 00:14:50.391 "listen_addresses": [ 00:14:50.391 { 00:14:50.391 "trtype": "VFIOUSER", 00:14:50.391 "adrfam": "IPv4", 00:14:50.391 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:50.391 "trsvcid": "0" 00:14:50.391 } 00:14:50.391 ], 00:14:50.391 "allow_any_host": true, 00:14:50.391 "hosts": [], 00:14:50.391 "serial_number": "SPDK1", 00:14:50.391 "model_number": "SPDK bdev Controller", 00:14:50.391 "max_namespaces": 32, 00:14:50.391 "min_cntlid": 1, 00:14:50.391 "max_cntlid": 65519, 00:14:50.391 "namespaces": [ 00:14:50.391 { 00:14:50.392 "nsid": 1, 00:14:50.392 "bdev_name": "Malloc1", 00:14:50.392 "name": "Malloc1", 00:14:50.392 "nguid": "808E6E318D7D43FA8F5BE8483AC3DC5D", 00:14:50.392 "uuid": "808e6e31-8d7d-43fa-8f5b-e8483ac3dc5d" 00:14:50.392 }, 00:14:50.392 { 00:14:50.392 "nsid": 2, 00:14:50.392 "bdev_name": "Malloc3", 00:14:50.392 "name": "Malloc3", 00:14:50.392 "nguid": "9C27C05DC3144CC1A37FFC4D2EB0442C", 00:14:50.392 "uuid": "9c27c05d-c314-4cc1-a37f-fc4d2eb0442c" 00:14:50.392 } 00:14:50.392 ] 00:14:50.392 }, 00:14:50.392 { 00:14:50.392 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:50.392 "subtype": "NVMe", 00:14:50.392 "listen_addresses": [ 00:14:50.392 { 00:14:50.392 "trtype": "VFIOUSER", 00:14:50.392 "adrfam": "IPv4", 00:14:50.392 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:50.392 "trsvcid": "0" 00:14:50.392 } 00:14:50.392 ], 00:14:50.392 "allow_any_host": true, 00:14:50.392 "hosts": [], 00:14:50.392 "serial_number": "SPDK2", 00:14:50.392 "model_number": "SPDK bdev Controller", 00:14:50.392 "max_namespaces": 32, 00:14:50.392 "min_cntlid": 1, 00:14:50.392 "max_cntlid": 65519, 00:14:50.392 "namespaces": [ 00:14:50.392 { 00:14:50.392 "nsid": 1, 00:14:50.392 "bdev_name": "Malloc2", 00:14:50.392 "name": "Malloc2", 00:14:50.392 "nguid": "A6587B9FADAC4344AD843AA692D4CF4C", 00:14:50.392 "uuid": "a6587b9f-adac-4344-ad84-3aa692d4cf4c" 00:14:50.392 } 00:14:50.392 ] 00:14:50.392 } 00:14:50.392 ] 00:14:50.392 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:50.392 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=207443 00:14:50.392 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:50.392 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:50.392 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:50.392 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:50.392 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:14:50.392 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:14:50.392 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:14:50.392 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.650 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:50.650 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:14:50.650 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:14:50.650 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:14:50.650 [2024-07-26 14:08:58.493067] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:50.650 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:50.650 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:50.650 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:50.650 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:50.650 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:50.908 Malloc4 00:14:50.908 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:51.166 [2024-07-26 14:08:59.130795] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:51.166 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:51.166 Asynchronous Event Request test 00:14:51.166 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:51.166 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:51.166 Registering asynchronous event callbacks... 00:14:51.166 Starting namespace attribute notice tests for all controllers... 00:14:51.166 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:51.166 aer_cb - Changed Namespace 00:14:51.166 Cleaning up... 
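
In the AER exercise above, rpc.py bdev_malloc_create and nvmf_subsystem_add_ns attach Malloc4 as namespace 2 of cnode2 while the aer test tool holds an outstanding Asynchronous Event Request; the controller completes it with event type 0x02 (notice) and log page 4 (Changed Namespace List), which is what the "aer_cb - Changed Namespace" line reports. A minimal sketch of how a host arms that callback with SPDK's public NVMe API, decoding completion dword 0 per the NVMe spec layout (error handling omitted):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Called when an outstanding AER completes. Completion dword 0 packs:
     * event type in bits 2:0, event info in bits 15:8, log page in 23:16. */
    static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        uint8_t type = cpl->cdw0 & 0x7;
        uint8_t info = (cpl->cdw0 >> 8) & 0xff;
        uint8_t log_page = (cpl->cdw0 >> 16) & 0xff;

        /* type 0x02 with log page 0x04 is the Changed Namespace List
         * notice seen in the output above. */
        printf("aer_cb for log page %u, type 0x%02x, info 0x%02x\n",
               log_page, type, info);
    }

    /* Arm the callback once the controller is attached; the driver keeps
     * AER commands outstanding on the admin queue on our behalf. */
    void arm_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
    }
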
00:14:51.424 [ 00:14:51.424 { 00:14:51.424 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:51.424 "subtype": "Discovery", 00:14:51.424 "listen_addresses": [], 00:14:51.424 "allow_any_host": true, 00:14:51.424 "hosts": [] 00:14:51.424 }, 00:14:51.424 { 00:14:51.424 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:51.424 "subtype": "NVMe", 00:14:51.424 "listen_addresses": [ 00:14:51.424 { 00:14:51.424 "trtype": "VFIOUSER", 00:14:51.424 "adrfam": "IPv4", 00:14:51.424 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:51.424 "trsvcid": "0" 00:14:51.424 } 00:14:51.424 ], 00:14:51.424 "allow_any_host": true, 00:14:51.424 "hosts": [], 00:14:51.424 "serial_number": "SPDK1", 00:14:51.424 "model_number": "SPDK bdev Controller", 00:14:51.424 "max_namespaces": 32, 00:14:51.424 "min_cntlid": 1, 00:14:51.424 "max_cntlid": 65519, 00:14:51.424 "namespaces": [ 00:14:51.424 { 00:14:51.424 "nsid": 1, 00:14:51.424 "bdev_name": "Malloc1", 00:14:51.424 "name": "Malloc1", 00:14:51.424 "nguid": "808E6E318D7D43FA8F5BE8483AC3DC5D", 00:14:51.424 "uuid": "808e6e31-8d7d-43fa-8f5b-e8483ac3dc5d" 00:14:51.424 }, 00:14:51.424 { 00:14:51.424 "nsid": 2, 00:14:51.424 "bdev_name": "Malloc3", 00:14:51.424 "name": "Malloc3", 00:14:51.424 "nguid": "9C27C05DC3144CC1A37FFC4D2EB0442C", 00:14:51.424 "uuid": "9c27c05d-c314-4cc1-a37f-fc4d2eb0442c" 00:14:51.424 } 00:14:51.424 ] 00:14:51.424 }, 00:14:51.424 { 00:14:51.424 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:51.424 "subtype": "NVMe", 00:14:51.424 "listen_addresses": [ 00:14:51.424 { 00:14:51.424 "trtype": "VFIOUSER", 00:14:51.424 "adrfam": "IPv4", 00:14:51.424 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:51.424 "trsvcid": "0" 00:14:51.424 } 00:14:51.424 ], 00:14:51.424 "allow_any_host": true, 00:14:51.424 "hosts": [], 00:14:51.424 "serial_number": "SPDK2", 00:14:51.424 "model_number": "SPDK bdev Controller", 00:14:51.424 "max_namespaces": 32, 00:14:51.424 "min_cntlid": 1, 00:14:51.424 "max_cntlid": 65519, 00:14:51.424 "namespaces": [ 00:14:51.424 { 00:14:51.424 "nsid": 1, 00:14:51.424 "bdev_name": "Malloc2", 00:14:51.424 "name": "Malloc2", 00:14:51.424 "nguid": "A6587B9FADAC4344AD843AA692D4CF4C", 00:14:51.424 "uuid": "a6587b9f-adac-4344-ad84-3aa692d4cf4c" 00:14:51.424 }, 00:14:51.424 { 00:14:51.424 "nsid": 2, 00:14:51.424 "bdev_name": "Malloc4", 00:14:51.424 "name": "Malloc4", 00:14:51.424 "nguid": "BC732852FAB84CEEB434EF369EEEF8E5", 00:14:51.425 "uuid": "bc732852-fab8-4cee-b434-ef369eeef8e5" 00:14:51.425 } 00:14:51.425 ] 00:14:51.425 } 00:14:51.425 ] 00:14:51.425 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 207443 00:14:51.425 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:51.425 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 201830 00:14:51.425 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 201830 ']' 00:14:51.425 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 201830 00:14:51.425 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:14:51.425 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:51.425 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 201830 00:14:51.425 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:51.425 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:51.425 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 201830' 00:14:51.425 killing process with pid 201830 00:14:51.425 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 201830 00:14:51.425 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 201830 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=207593 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 207593' 00:14:51.989 Process pid: 207593 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 207593 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 207593 ']' 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:51.989 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:51.989 [2024-07-26 14:08:59.816652] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:51.989 [2024-07-26 14:08:59.817749] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:14:51.989 [2024-07-26 14:08:59.817813] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.989 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.989 [2024-07-26 14:08:59.875139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.989 [2024-07-26 14:08:59.973909] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.989 [2024-07-26 14:08:59.973952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.989 [2024-07-26 14:08:59.973982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.989 [2024-07-26 14:08:59.973994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.989 [2024-07-26 14:08:59.974004] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.989 [2024-07-26 14:08:59.974096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.989 [2024-07-26 14:08:59.974131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.989 [2024-07-26 14:08:59.974204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.989 [2024-07-26 14:08:59.974206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.246 [2024-07-26 14:09:00.076946] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:52.246 [2024-07-26 14:09:00.077219] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:52.246 [2024-07-26 14:09:00.077439] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:52.246 [2024-07-26 14:09:00.078129] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:52.247 [2024-07-26 14:09:00.078364] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
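With the target now up in interrupt mode (every reactor and spdk_thread reports intr mode above), the trace that follows rebuilds the two vfio-user endpoints over RPC, this time passing the extra transport args '-M -I'. Condensed, the sequence is the following (paths shortened to $SPDK_DIR; flags copied verbatim from the trace, so this is a sketch of what setup_nvmf_vfio_user does rather than a reference invocation):

    rpc="$SPDK_DIR/scripts/rpc.py"
    # The transport is created once, with the extra args under test in this pass.
    $rpc nvmf_create_transport -t VFIOUSER -M -I
    for i in 1 2; do
        # One vfio-user socket directory, malloc bdev, subsystem, namespace, and
        # listener per simulated device, exactly as in the trace below.
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done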
00:14:52.247 14:09:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:52.247 14:09:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:52.247 14:09:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:53.281 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:53.601 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:53.601 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:53.601 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:53.601 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:53.601 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:53.884 Malloc1 00:14:53.885 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:53.885 14:09:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:54.181 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:54.539 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:54.539 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:54.539 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:54.829 Malloc2 00:14:54.829 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:55.112 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:55.432 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:55.714 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:55.714 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 207593 00:14:55.714 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 207593 ']' 00:14:55.714 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 207593 00:14:55.714 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:14:55.714 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:55.714 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 207593 00:14:55.714 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:55.714 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:55.714 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 207593' 00:14:55.714 killing process with pid 207593 00:14:55.714 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 207593 00:14:55.714 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 207593 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:55.992 00:14:55.992 real 0m52.979s 00:14:55.992 user 3m29.242s 00:14:55.992 sys 0m4.387s 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:55.992 ************************************ 00:14:55.992 END TEST nvmf_vfio_user 00:14:55.992 ************************************ 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:55.992 ************************************ 00:14:55.992 START TEST nvmf_vfio_user_nvme_compliance 00:14:55.992 ************************************ 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:55.992 * Looking for test storage... 
00:14:55.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.992 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=208326 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 208326' 00:14:55.993 Process pid: 208326 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 208326 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 208326 ']' 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:55.993 14:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:55.993 [2024-07-26 14:09:03.918960] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:14:55.993 [2024-07-26 14:09:03.919044] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.993 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.993 [2024-07-26 14:09:03.978556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:56.271 [2024-07-26 14:09:04.086538] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.271 [2024-07-26 14:09:04.086592] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.271 [2024-07-26 14:09:04.086620] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.271 [2024-07-26 14:09:04.086632] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.271 [2024-07-26 14:09:04.086641] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.271 [2024-07-26 14:09:04.086706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.272 [2024-07-26 14:09:04.086739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.272 [2024-07-26 14:09:04.086742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.272 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:56.272 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:14:56.272 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:57.270 malloc0 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.270 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:57.541 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.541 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:57.541 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.541 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:57.541 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.541 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:57.541 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.541 00:14:57.541 00:14:57.541 CUnit - A unit testing framework for C - Version 2.1-3 00:14:57.541 http://cunit.sourceforge.net/ 00:14:57.541 00:14:57.541 00:14:57.541 Suite: nvme_compliance 00:14:57.541 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-26 14:09:05.440686] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.541 [2024-07-26 14:09:05.442201] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:57.541 [2024-07-26 14:09:05.442225] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:57.541 [2024-07-26 14:09:05.442237] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:57.541 [2024-07-26 14:09:05.443709] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.541 passed 00:14:57.541 Test: admin_identify_ctrlr_verify_fused ...[2024-07-26 14:09:05.528271] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.541 [2024-07-26 14:09:05.531292] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.832 passed 00:14:57.832 Test: admin_identify_ns ...[2024-07-26 14:09:05.619189] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.832 [2024-07-26 14:09:05.679551] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:57.832 [2024-07-26 14:09:05.687547] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:57.832 [2024-07-26 
14:09:05.708659] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.832 passed 00:14:57.832 Test: admin_get_features_mandatory_features ...[2024-07-26 14:09:05.791145] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.832 [2024-07-26 14:09:05.794165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.832 passed 00:14:58.098 Test: admin_get_features_optional_features ...[2024-07-26 14:09:05.877703] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.098 [2024-07-26 14:09:05.880724] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.098 passed 00:14:58.098 Test: admin_set_features_number_of_queues ...[2024-07-26 14:09:05.963936] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.098 [2024-07-26 14:09:06.068670] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.098 passed 00:14:58.390 Test: admin_get_log_page_mandatory_logs ...[2024-07-26 14:09:06.154786] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.390 [2024-07-26 14:09:06.157813] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.390 passed 00:14:58.390 Test: admin_get_log_page_with_lpo ...[2024-07-26 14:09:06.240983] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.390 [2024-07-26 14:09:06.308541] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:58.390 [2024-07-26 14:09:06.321622] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.390 passed 00:14:58.675 Test: fabric_property_get ...[2024-07-26 14:09:06.406163] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.675 [2024-07-26 14:09:06.407462] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:58.675 [2024-07-26 14:09:06.409187] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.675 passed 00:14:58.675 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-26 14:09:06.492736] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.675 [2024-07-26 14:09:06.494052] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:58.675 [2024-07-26 14:09:06.495758] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.675 passed 00:14:58.675 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-26 14:09:06.579029] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.675 [2024-07-26 14:09:06.662540] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:58.675 [2024-07-26 14:09:06.678536] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:58.675 [2024-07-26 14:09:06.683687] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.941 passed 00:14:58.941 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-26 14:09:06.767070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.941 [2024-07-26 14:09:06.768390] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:14:58.941 [2024-07-26 14:09:06.770096] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.941 passed 00:14:58.941 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-26 14:09:06.855038] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.941 [2024-07-26 14:09:06.931538] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:58.941 [2024-07-26 14:09:06.954557] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:59.217 [2024-07-26 14:09:06.959641] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.217 passed 00:14:59.217 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-26 14:09:07.044263] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.217 [2024-07-26 14:09:07.045602] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:59.217 [2024-07-26 14:09:07.045649] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:59.217 [2024-07-26 14:09:07.047283] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.217 passed 00:14:59.217 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-26 14:09:07.125662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.217 [2024-07-26 14:09:07.217543] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:59.217 [2024-07-26 14:09:07.225554] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:59.508 [2024-07-26 14:09:07.233557] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:59.508 [2024-07-26 14:09:07.241553] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:59.508 [2024-07-26 14:09:07.270653] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.508 passed 00:14:59.508 Test: admin_create_io_sq_verify_pc ...[2024-07-26 14:09:07.353802] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.508 [2024-07-26 14:09:07.370553] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:59.508 [2024-07-26 14:09:07.388141] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.508 passed 00:14:59.508 Test: admin_create_io_qp_max_qps ...[2024-07-26 14:09:07.471686] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:00.966 [2024-07-26 14:09:08.562558] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:00.966 [2024-07-26 14:09:08.937296] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:00.966 passed 00:15:01.259 Test: admin_create_io_sq_shared_cq ...[2024-07-26 14:09:09.021011] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:01.259 [2024-07-26 14:09:09.152537] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:01.259 [2024-07-26 14:09:09.189632] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:01.259 passed 00:15:01.259 00:15:01.259 Run Summary: Type Total Ran Passed Failed Inactive 00:15:01.259 
suites 1 1 n/a 0 0 00:15:01.259 tests 18 18 18 0 0 00:15:01.259 asserts 360 360 360 0 n/a 00:15:01.259 00:15:01.259 Elapsed time = 1.552 seconds 00:15:01.259 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 208326 00:15:01.259 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 208326 ']' 00:15:01.259 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 208326 00:15:01.259 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:01.259 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:01.259 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 208326 00:15:01.537 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:01.537 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:01.538 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 208326' 00:15:01.538 killing process with pid 208326 00:15:01.538 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 208326 00:15:01.538 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 208326 00:15:01.538 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:01.538 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:01.538 00:15:01.538 real 0m5.740s 00:15:01.538 user 0m16.016s 00:15:01.538 sys 0m0.548s 00:15:01.538 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.538 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:01.538 ************************************ 00:15:01.538 END TEST nvmf_vfio_user_nvme_compliance 00:15:01.538 ************************************ 00:15:01.812 14:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:01.812 14:09:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:01.812 14:09:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:01.812 14:09:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:01.813 ************************************ 00:15:01.813 START TEST nvmf_vfio_user_fuzz 00:15:01.813 ************************************ 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:01.813 * Looking for test storage... 
00:15:01.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=209585 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 209585' 00:15:01.813 Process pid: 209585 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 209585 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 209585 ']' 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:01.813 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:02.085 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:02.085 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:02.085 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:03.021 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:03.021 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.021 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:03.021 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.021 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:03.021 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:03.021 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.021 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:03.021 malloc0 00:15:03.021 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.021 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:03.021 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.021 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:03.021 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.021 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:03.021 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.021 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:03.279 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.279 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:03.279 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.279 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:03.279 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.279 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
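The trid string assembled above is what the fuzzer is pointed at next: a single timed nvme_fuzz pass against the vfio-user controller, with the seed pinned so the run is reproducible. The invocation that follows, condensed (flags copied verbatim from the trace; -t appears to set the run time in seconds, since the run above lasts exactly 30 s, and -S the random seed, while -m 0x2 is the usual SPDK core mask; -N and -a are passed through as-is here without further interpretation):

    "$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
        -N -a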
00:15:03.279 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:35.361 Fuzzing completed. Shutting down the fuzz application 00:15:35.361 00:15:35.361 Dumping successful admin opcodes: 00:15:35.361 8, 9, 10, 24, 00:15:35.361 Dumping successful io opcodes: 00:15:35.361 0, 00:15:35.361 NS: 0x200003a1ef00 I/O qp, Total commands completed: 647484, total successful commands: 2512, random_seed: 1075386688 00:15:35.361 NS: 0x200003a1ef00 admin qp, Total commands completed: 82576, total successful commands: 659, random_seed: 2486941248 00:15:35.361 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:35.361 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.361 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.361 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.361 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 209585 00:15:35.361 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 209585 ']' 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 209585 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 209585 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 209585' 00:15:35.362 killing process with pid 209585 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 209585 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 209585 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:35.362 00:15:35.362 real 0m32.259s 00:15:35.362 user 0m31.508s 00:15:35.362 sys 0m27.803s 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.362 ************************************ 
00:15:35.362 END TEST nvmf_vfio_user_fuzz 00:15:35.362 ************************************ 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:35.362 ************************************ 00:15:35.362 START TEST nvmf_auth_target 00:15:35.362 ************************************ 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:35.362 * Looking for test storage... 00:15:35.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:35.362 14:09:41
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:35.362 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # 
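The declarations at target/auth.sh@13-18 above fix the whole test matrix: three digests, six DH groups, and four key slots, exercised against subsystem nqn.2024-03.io.spdk:cnode0 over the host RPC socket /var/tmp/host.sock. The for-loops at target/auth.sh@91-94 later in this trace walk that matrix one combination at a time. A minimal sketch of the shape of that iteration (illustrative, not the script's literal body):

    digests=("sha256" "sha384" "sha512")
    dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
    keys=("key0" "key1" "key2" "key3")   # DHCHAP secrets, registered further below

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # each pass reconfigures the host, re-attaches the controller,
                # and asserts the qpair reports auth.state == "completed"
                echo "digest=$digest dhgroup=$dhgroup key=key$keyid"
            done
        done
    done
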
pci_net_devs=() 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.298 14:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:36.298 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.298 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:36.299 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:36.299 Found net devices under 0000:09:00.0: cvl_0_0 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.299 14:09:44 
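The discovery loop above (nvmf/common.sh@340-401) resolves each matching PCI function to its kernel net device through sysfs; that is how 0000:09:00.0 and 0000:09:00.1 become cvl_0_0 and cvl_0_1. A standalone sketch of the same lookup, assuming lspci is available (the harness itself consults a pre-built pci_bus_cache array instead):

    # E810 device IDs taken from the trace: 0x1592 and 0x159b
    for id in 1592 159b; do
        for pci in $(lspci -Dmm -d "8086:${id}" | awk '{print $1}'); do
            for net in "/sys/bus/pci/devices/${pci}/net/"*; do
                [[ -e $net ]] && echo "Found net devices under ${pci}: ${net##*/}"
            done
        done
    done
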
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:36.299 Found net devices under 0000:09:00.1: cvl_0_1 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.299 14:09:44 
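nvmf_tcp_init (nvmf/common.sh@229-261 above) splits the two E810 ports across network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side at 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24. On a machine without two physical ports, a veth pair reproduces the same topology; the device and namespace names below are illustrative:

    ip netns add tgt_ns
    ip link add veth_init type veth peer name veth_tgt
    ip link set veth_tgt netns tgt_ns
    ip addr add 10.0.0.1/24 dev veth_init
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_init up
    ip netns exec tgt_ns ip link set veth_tgt up
    ip netns exec tgt_ns ip link set lo up
    ping -c 1 10.0.0.2    # initiator -> target, as verified in the trace
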
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:36.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:15:36.299 00:15:36.299 --- 10.0.0.2 ping statistics --- 00:15:36.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.299 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:36.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:15:36.299 00:15:36.299 --- 10.0.0.1 ping statistics --- 00:15:36.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.299 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=215036 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 215036 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 215036 ']' 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:36.299 14:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:36.299 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=215055 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cb7c6a6157b38d67957220ba2d20045d395706d1d78b3ef5 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.PHb 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cb7c6a6157b38d67957220ba2d20045d395706d1d78b3ef5 0 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cb7c6a6157b38d67957220ba2d20045d395706d1d78b3ef5 0 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cb7c6a6157b38d67957220ba2d20045d395706d1d78b3ef5 00:15:36.867 14:09:44 
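At this point both halves of the test are running: nvmf_tgt (pid 215036) inside cvl_0_0_ns_spdk with -L nvmf_auth tracing the target side of DH-HMAC-CHAP, and spdk_tgt (pid 215055) acting as the host/initiator on its own RPC socket with -L nvme_auth. Everything that follows drives one or the other over JSON-RPC; a condensed sketch of the split (rpc.py defaults to /var/tmp/spdk.sock, which is why only host-side calls pass -s):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    ./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &

    ./scripts/rpc.py spdk_get_version                           # target-side RPC
    ./scripts/rpc.py -s /var/tmp/host.sock spdk_get_version     # host-side RPC
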
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.PHb 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.PHb 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.PHb 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0496d92babaa81a03ce248181e22e59cd2e3e17955b32e720b2542d89a655ad8 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.z5c 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0496d92babaa81a03ce248181e22e59cd2e3e17955b32e720b2542d89a655ad8 3 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0496d92babaa81a03ce248181e22e59cd2e3e17955b32e720b2542d89a655ad8 3 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0496d92babaa81a03ce248181e22e59cd2e3e17955b32e720b2542d89a655ad8 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.z5c 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.z5c 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.z5c 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:36.867 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.867 14:09:44 
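gen_dhchap_key above draws random bytes from /dev/urandom (24 for a 48-character key), keeps the hex string itself as the ASCII key text, and format_dhchap_key wraps it into the DHHC-1 form that nvme connect consumes later: DHHC-1:<id>:<base64 of the text plus a 4-byte checksum>:, where <id> encodes the HMAC (00 none, 01 sha256, 02 sha384, 03 sha512). That is why key0 resurfaces below as DHHC-1:00:Y2I3... (base64 of "cb7c6a61...") and its sha512 ctrlr key as DHHC-1:03:.... The resulting /tmp/spdk.key-* files are chmod 0600, as the trace shows. A sketch of the wrapping, assuming the usual DHHC-1 little-endian CRC32 trailer (the script's inline python is not visible in the trace):

    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, used as ASCII key text
    digest=0                               # 0=none 1=sha256 2=sha384 3=sha512
    python3 - "$key" "$digest" <<'PY'
    import base64, struct, sys, zlib
    text = sys.argv[1].encode()
    trailer = struct.pack('<I', zlib.crc32(text))   # assumed CRC32 trailer
    print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(text + trailer).decode()}:")
    PY
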
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f9b62d18ca1707c6e6bd8b87fa3d7533 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Sn9 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f9b62d18ca1707c6e6bd8b87fa3d7533 1 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f9b62d18ca1707c6e6bd8b87fa3d7533 1 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f9b62d18ca1707c6e6bd8b87fa3d7533 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Sn9 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Sn9 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Sn9 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a4692d876db02a661383747a5e517388a60a38f41dd07adf 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.HiE 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a4692d876db02a661383747a5e517388a60a38f41dd07adf 2 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
a4692d876db02a661383747a5e517388a60a38f41dd07adf 2 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a4692d876db02a661383747a5e517388a60a38f41dd07adf 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.HiE 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.HiE 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.HiE 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8962f94c024104ca5702c1f37c230f7186669c309b51dd3b 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3X4 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8962f94c024104ca5702c1f37c230f7186669c309b51dd3b 2 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8962f94c024104ca5702c1f37c230f7186669c309b51dd3b 2 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8962f94c024104ca5702c1f37c230f7186669c309b51dd3b 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:36.868 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3X4 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3X4 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.3X4 00:15:37.127 14:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e7e6d30db244c33947094cb4f9125555 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.X9y 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e7e6d30db244c33947094cb4f9125555 1 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e7e6d30db244c33947094cb4f9125555 1 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e7e6d30db244c33947094cb4f9125555 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.X9y 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.X9y 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.X9y 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=178a3a9dc93f2d605486b94d24ed334a9d3b487ce32288e36922d406b36392a4 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:37.127 
14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qK2 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 178a3a9dc93f2d605486b94d24ed334a9d3b487ce32288e36922d406b36392a4 3 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 178a3a9dc93f2d605486b94d24ed334a9d3b487ce32288e36922d406b36392a4 3 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=178a3a9dc93f2d605486b94d24ed334a9d3b487ce32288e36922d406b36392a4 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qK2 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qK2 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.qK2 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 215036 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 215036 ']' 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:37.127 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.386 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:37.386 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:37.386 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 215055 /var/tmp/host.sock 00:15:37.386 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 215055 ']' 00:15:37.386 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:37.386 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:37.386 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:15:37.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:37.386 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:37.386 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.644 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:37.644 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:37.644 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:37.644 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.644 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.644 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.644 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:37.644 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PHb 00:15:37.644 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.644 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.644 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.644 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.PHb 00:15:37.644 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.PHb 00:15:37.902 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.z5c ]] 00:15:37.902 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.z5c 00:15:37.902 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.902 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.902 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.902 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.z5c 00:15:37.902 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.z5c 00:15:38.160 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:38.160 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Sn9 00:15:38.160 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.160 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.160 14:09:46 
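The loop above (target/auth.sh@81-86) registers every secret twice under the same name: once on the target via rpc_cmd on the default socket, and once on the host via hostrpc on /var/tmp/host.sock. The target later resolves key0/ckey0 from nvmf_subsystem_add_host and the host resolves them from bdev_nvme_attach_controller, so both keyrings must agree on the names. Condensed, with illustrative array variables standing in for the mktemp paths such as /tmp/spdk.key-null.PHb:

    for i in 0 1 2 3; do
        ./scripts/rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
        ./scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"
        # ctrlr keys follow the same pattern; slot 3 has none in this run
        [[ -n "${ckeys[$i]}" ]] || continue
        ./scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        ./scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    done
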
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.160 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Sn9 00:15:38.160 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Sn9 00:15:38.417 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.HiE ]] 00:15:38.417 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HiE 00:15:38.417 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.417 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.417 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.417 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HiE 00:15:38.417 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HiE 00:15:38.674 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:38.674 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.3X4 00:15:38.674 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.674 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.674 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.674 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.3X4 00:15:38.674 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.3X4 00:15:38.931 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.X9y ]] 00:15:38.931 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.X9y 00:15:38.931 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.931 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.931 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.931 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.X9y 00:15:38.931 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.X9y 00:15:39.189 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:39.189 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qK2 00:15:39.189 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.189 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.189 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.189 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qK2 00:15:39.189 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.qK2 00:15:39.447 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:39.447 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:39.447 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.447 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.447 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:39.447 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:39.705 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:39.705 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.705 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:39.705 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:39.705 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:39.705 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.705 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.705 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.705 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.705 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.705 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.705 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.976 00:15:39.976 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.976 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.976 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.238 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.238 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.238 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.238 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.238 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.238 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.238 { 00:15:40.238 "cntlid": 1, 00:15:40.238 "qid": 0, 00:15:40.238 "state": "enabled", 00:15:40.238 "thread": "nvmf_tgt_poll_group_000", 00:15:40.238 "listen_address": { 00:15:40.238 "trtype": "TCP", 00:15:40.238 "adrfam": "IPv4", 00:15:40.238 "traddr": "10.0.0.2", 00:15:40.238 "trsvcid": "4420" 00:15:40.238 }, 00:15:40.238 "peer_address": { 00:15:40.238 "trtype": "TCP", 00:15:40.238 "adrfam": "IPv4", 00:15:40.238 "traddr": "10.0.0.1", 00:15:40.238 "trsvcid": "56888" 00:15:40.238 }, 00:15:40.238 "auth": { 00:15:40.238 "state": "completed", 00:15:40.238 "digest": "sha256", 00:15:40.238 "dhgroup": "null" 00:15:40.238 } 00:15:40.238 } 00:15:40.238 ]' 00:15:40.238 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.238 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.238 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.238 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:40.238 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.238 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.238 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.239 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.497 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
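That qpair dump is the actual assertion of the test: after bdev_nvme_set_options restricted the host to sha256/null and bdev_nvme_attach_controller connected with key0/ckey0, the target must report the session as authenticated with exactly those parameters. One connect_authenticate round, condensed from the trace:

    HOSTRPC="./scripts/rpc.py -s /var/tmp/host.sock"
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    ./scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # assert the negotiated parameters on the target side
    ./scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | jq -e \
        '.[0].auth | .state == "completed" and .digest == "sha256" and .dhgroup == "null"'
    $HOSTRPC bdev_nvme_detach_controller nvme0
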
DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:15:45.758 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.759 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:45.759 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.759 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.759 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.759 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.759 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:45.759 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
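Each round then replays the same credentials through the kernel initiator: nvme connect takes the DHHC-1 strings directly (bidirectional authentication when --dhchap-ctrl-secret is also given) instead of SPDK keyring names, which is why the formatted DHHC-1:00:/DHHC-1:03: values generated earlier appear on the command line here. The shape of that step, with secrets abbreviated and SUBNQN/HOSTNQN as in the previous sketch:

    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n "$SUBNQN"
    # the target forgets the host before the next key slot is wired up
    ./scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
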
ckey1 00:15:45.759 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.759 { 00:15:45.759 "cntlid": 3, 00:15:45.759 "qid": 0, 00:15:45.759 "state": "enabled", 00:15:45.759 "thread": "nvmf_tgt_poll_group_000", 00:15:45.759 "listen_address": { 00:15:45.759 "trtype": "TCP", 00:15:45.759 "adrfam": "IPv4", 00:15:45.759 "traddr": "10.0.0.2", 00:15:45.759 "trsvcid": "4420" 00:15:45.759 }, 00:15:45.759 "peer_address": { 00:15:45.759 "trtype": "TCP", 00:15:45.759 "adrfam": "IPv4", 00:15:45.759 "traddr": "10.0.0.1", 00:15:45.759 "trsvcid": "56910" 00:15:45.759 }, 00:15:45.759 "auth": { 00:15:45.759 "state": "completed", 00:15:45.759 "digest": "sha256", 00:15:45.759 "dhgroup": "null" 00:15:45.759 } 00:15:45.759 } 00:15:45.759 ]' 00:15:45.759 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.017 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.017 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.017 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:46.017 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.017 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.017 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.017 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.275 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:15:47.207 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.207 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:15:47.207 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:47.208 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.208 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.208 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.208 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.208 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.208 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.465 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:47.465 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.465 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:47.465 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:47.465 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:47.465 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.465 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.465 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.465 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.465 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.465 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.465 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.723 00:15:47.723 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.723 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.723 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.981 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.981 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.981 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.981 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.981 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.981 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.981 { 00:15:47.981 "cntlid": 5, 00:15:47.981 "qid": 0, 00:15:47.981 "state": "enabled", 00:15:47.981 "thread": "nvmf_tgt_poll_group_000", 00:15:47.981 "listen_address": { 00:15:47.981 "trtype": "TCP", 00:15:47.981 "adrfam": "IPv4", 00:15:47.981 "traddr": "10.0.0.2", 00:15:47.981 "trsvcid": "4420" 00:15:47.981 }, 00:15:47.981 "peer_address": { 00:15:47.981 "trtype": "TCP", 00:15:47.981 "adrfam": "IPv4", 00:15:47.981 "traddr": "10.0.0.1", 00:15:47.981 "trsvcid": "39680" 00:15:47.981 }, 00:15:47.981 "auth": { 00:15:47.981 "state": "completed", 00:15:47.981 "digest": "sha256", 00:15:47.981 "dhgroup": "null" 00:15:47.981 } 00:15:47.981 } 00:15:47.981 ]' 00:15:47.981 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.981 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.981 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.981 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:47.981 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.981 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.981 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.981 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.239 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:15:49.173 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.173 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:49.173 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:49.173 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.173 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.173 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.173 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:49.173 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:49.430 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:15:49.430 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.430 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:49.430 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:49.430 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:49.430 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.430 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:49.430 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.430 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.430 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.430 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.430 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.688 00:15:49.946 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.946 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.946 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.946 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.946 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.946 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.946 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.203 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.203 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.203 { 00:15:50.203 "cntlid": 7, 00:15:50.203 "qid": 0, 00:15:50.203 "state": "enabled", 00:15:50.203 "thread": "nvmf_tgt_poll_group_000", 00:15:50.203 "listen_address": { 00:15:50.203 "trtype": "TCP", 00:15:50.203 "adrfam": "IPv4", 00:15:50.203 "traddr": "10.0.0.2", 00:15:50.203 "trsvcid": "4420" 00:15:50.203 }, 00:15:50.203 "peer_address": { 00:15:50.203 "trtype": "TCP", 00:15:50.203 "adrfam": "IPv4", 00:15:50.203 "traddr": "10.0.0.1", 00:15:50.203 "trsvcid": "39702" 00:15:50.203 }, 00:15:50.203 "auth": { 00:15:50.203 "state": "completed", 00:15:50.203 "digest": "sha256", 00:15:50.203 "dhgroup": "null" 00:15:50.203 } 00:15:50.203 } 00:15:50.203 ]' 00:15:50.203 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.203 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.203 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.203 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:50.203 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.203 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.203 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.203 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.461 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:15:51.393 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.393 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:51.393 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.393 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.393 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.393 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.393 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.393 14:09:59 
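The sha256/null pass ends at this point: a "null" DH group runs DH-HMAC-CHAP as pure challenge-response with no ephemeral Diffie-Hellman exchange, and the loop now repeats the same four key slots with ffdhe2048. Each pass is set up by the same pair of RPCs traced throughout this log; a minimal sketch, where rpc.py abbreviates the full workspace path above and /var/tmp/host.sock is the SPDK initiator's RPC socket:

    # host side: accept exactly one digest/DH-group combination per pass
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # target side (default RPC socket): authorize the host NQN with the
    # key slot under test before the controller is attached
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
            nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0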
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:51.393 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:51.651 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:15:51.651 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.651 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:51.651 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:51.651 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:51.651 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.651 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.651 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.651 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.651 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.651 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.651 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.909 00:15:51.909 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.909 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.909 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.167 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.167 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.167 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.167 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.167 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.167 14:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.167 { 00:15:52.167 "cntlid": 9, 00:15:52.167 "qid": 0, 00:15:52.167 "state": "enabled", 00:15:52.167 "thread": "nvmf_tgt_poll_group_000", 00:15:52.167 "listen_address": { 00:15:52.167 "trtype": "TCP", 00:15:52.167 "adrfam": "IPv4", 00:15:52.167 "traddr": "10.0.0.2", 00:15:52.167 "trsvcid": "4420" 00:15:52.167 }, 00:15:52.167 "peer_address": { 00:15:52.167 "trtype": "TCP", 00:15:52.167 "adrfam": "IPv4", 00:15:52.167 "traddr": "10.0.0.1", 00:15:52.167 "trsvcid": "39732" 00:15:52.167 }, 00:15:52.167 "auth": { 00:15:52.167 "state": "completed", 00:15:52.167 "digest": "sha256", 00:15:52.167 "dhgroup": "ffdhe2048" 00:15:52.167 } 00:15:52.167 } 00:15:52.167 ]' 00:15:52.167 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.167 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.167 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.167 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:52.167 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.167 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.167 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.167 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.424 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:15:53.358 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.358 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:53.358 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.358 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.358 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.358 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.358 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.358 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.616 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:15:53.616 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.616 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:53.616 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:53.616 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:53.616 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.616 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.616 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.616 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.616 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.616 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.616 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.874 00:15:53.874 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.874 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.874 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.131 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.131 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.131 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.131 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.131 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.131 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.131 { 00:15:54.131 "cntlid": 11, 00:15:54.131 "qid": 0, 00:15:54.131 "state": "enabled", 00:15:54.131 "thread": "nvmf_tgt_poll_group_000", 00:15:54.131 "listen_address": { 
00:15:54.131 "trtype": "TCP", 00:15:54.131 "adrfam": "IPv4", 00:15:54.131 "traddr": "10.0.0.2", 00:15:54.131 "trsvcid": "4420" 00:15:54.131 }, 00:15:54.131 "peer_address": { 00:15:54.131 "trtype": "TCP", 00:15:54.131 "adrfam": "IPv4", 00:15:54.131 "traddr": "10.0.0.1", 00:15:54.131 "trsvcid": "39746" 00:15:54.131 }, 00:15:54.131 "auth": { 00:15:54.131 "state": "completed", 00:15:54.131 "digest": "sha256", 00:15:54.131 "dhgroup": "ffdhe2048" 00:15:54.131 } 00:15:54.131 } 00:15:54.131 ]' 00:15:54.131 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.131 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.131 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.131 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.389 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.389 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.389 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.389 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.648 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:15:55.581 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.581 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:55.581 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.581 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.581 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.581 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.581 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:55.581 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:55.581 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:15:55.581 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.839 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:55.839 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:55.839 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:55.839 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.839 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.839 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.839 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.839 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.839 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.839 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.097 00:15:56.097 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.097 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.097 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.355 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.355 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.355 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.355 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.355 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.355 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.355 { 00:15:56.355 "cntlid": 13, 00:15:56.355 "qid": 0, 00:15:56.355 "state": "enabled", 00:15:56.355 "thread": "nvmf_tgt_poll_group_000", 00:15:56.355 "listen_address": { 00:15:56.355 "trtype": "TCP", 00:15:56.355 "adrfam": "IPv4", 00:15:56.355 "traddr": "10.0.0.2", 00:15:56.355 "trsvcid": "4420" 00:15:56.355 }, 00:15:56.355 "peer_address": { 00:15:56.355 "trtype": "TCP", 00:15:56.355 "adrfam": "IPv4", 00:15:56.355 "traddr": "10.0.0.1", 00:15:56.355 "trsvcid": "39498" 00:15:56.355 }, 00:15:56.355 "auth": { 00:15:56.355 
"state": "completed", 00:15:56.355 "digest": "sha256", 00:15:56.355 "dhgroup": "ffdhe2048" 00:15:56.355 } 00:15:56.355 } 00:15:56.355 ]' 00:15:56.355 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.355 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.355 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.355 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:56.355 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.355 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.355 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.355 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.612 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:15:57.545 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.546 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:57.546 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.546 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.546 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.546 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.546 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:57.546 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:57.803 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:15:57.803 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.803 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:57.803 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:57.803 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:15:57.803 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.803 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:57.803 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.803 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.803 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.803 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.803 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.061 00:15:58.061 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.061 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.061 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.318 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.318 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.318 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.318 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.318 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.318 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.318 { 00:15:58.318 "cntlid": 15, 00:15:58.318 "qid": 0, 00:15:58.318 "state": "enabled", 00:15:58.318 "thread": "nvmf_tgt_poll_group_000", 00:15:58.318 "listen_address": { 00:15:58.318 "trtype": "TCP", 00:15:58.318 "adrfam": "IPv4", 00:15:58.318 "traddr": "10.0.0.2", 00:15:58.318 "trsvcid": "4420" 00:15:58.318 }, 00:15:58.318 "peer_address": { 00:15:58.318 "trtype": "TCP", 00:15:58.318 "adrfam": "IPv4", 00:15:58.318 "traddr": "10.0.0.1", 00:15:58.318 "trsvcid": "39518" 00:15:58.319 }, 00:15:58.319 "auth": { 00:15:58.319 "state": "completed", 00:15:58.319 "digest": "sha256", 00:15:58.319 "dhgroup": "ffdhe2048" 00:15:58.319 } 00:15:58.319 } 00:15:58.319 ]' 00:15:58.319 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.319 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.319 14:10:06 
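Note the asymmetry in the key3 iterations (here and in the null pass earlier): there is no ckey3, so nvmf_subsystem_add_host and bdev_nvme_attach_controller are invoked with --dhchap-key key3 alone, and the matching nvme connect carries no --dhchap-ctrl-secret, exercising unidirectional authentication (the host proves itself; the controller is not challenged). The script gets this from bash's ":+" expansion, visible in the target/auth.sh@37 trace; a sketch, with $keyid, $subnqn and $hostnqn standing in for the script's positional $3 and the literal NQNs above:

    # expands to the option pair only when a controller key exists for
    # this slot; an unset or empty ckeys[$keyid] leaves ckey a zero-word array
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"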
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.319 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.319 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.576 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.576 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.576 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.834 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:15:59.767 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.767 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:59.767 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.767 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.767 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.767 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.767 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.767 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:59.767 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:59.768 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:15:59.768 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.768 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:59.768 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:59.768 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:59.768 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.768 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.768 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.768 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.768 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.768 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.768 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.332 00:16:00.332 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.332 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.332 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.332 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.332 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.332 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.332 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.332 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.332 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.332 { 00:16:00.332 "cntlid": 17, 00:16:00.332 "qid": 0, 00:16:00.332 "state": "enabled", 00:16:00.332 "thread": "nvmf_tgt_poll_group_000", 00:16:00.332 "listen_address": { 00:16:00.332 "trtype": "TCP", 00:16:00.332 "adrfam": "IPv4", 00:16:00.332 "traddr": "10.0.0.2", 00:16:00.332 "trsvcid": "4420" 00:16:00.332 }, 00:16:00.332 "peer_address": { 00:16:00.332 "trtype": "TCP", 00:16:00.332 "adrfam": "IPv4", 00:16:00.332 "traddr": "10.0.0.1", 00:16:00.332 "trsvcid": "39526" 00:16:00.332 }, 00:16:00.332 "auth": { 00:16:00.332 "state": "completed", 00:16:00.332 "digest": "sha256", 00:16:00.332 "dhgroup": "ffdhe3072" 00:16:00.332 } 00:16:00.332 } 00:16:00.332 ]' 00:16:00.332 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.591 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.591 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.591 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:00.591 14:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.591 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.591 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.591 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.849 14:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:16:01.781 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.781 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:01.781 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.781 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.781 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.781 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.781 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:01.781 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:02.040 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:02.040 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.040 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:02.040 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:02.040 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:02.040 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.040 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.040 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.040 14:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.040 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.040 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.040 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.297 00:16:02.297 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.297 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.297 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.563 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.563 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.563 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.563 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.563 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.563 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.563 { 00:16:02.563 "cntlid": 19, 00:16:02.563 "qid": 0, 00:16:02.563 "state": "enabled", 00:16:02.563 "thread": "nvmf_tgt_poll_group_000", 00:16:02.563 "listen_address": { 00:16:02.563 "trtype": "TCP", 00:16:02.563 "adrfam": "IPv4", 00:16:02.563 "traddr": "10.0.0.2", 00:16:02.563 "trsvcid": "4420" 00:16:02.563 }, 00:16:02.563 "peer_address": { 00:16:02.563 "trtype": "TCP", 00:16:02.563 "adrfam": "IPv4", 00:16:02.563 "traddr": "10.0.0.1", 00:16:02.563 "trsvcid": "39546" 00:16:02.563 }, 00:16:02.563 "auth": { 00:16:02.563 "state": "completed", 00:16:02.563 "digest": "sha256", 00:16:02.563 "dhgroup": "ffdhe3072" 00:16:02.563 } 00:16:02.563 } 00:16:02.563 ]' 00:16:02.563 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.563 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.563 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.563 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:02.563 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.563 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.563 14:10:10 
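Each attach is verified the same way before teardown: dump the subsystem's queue pairs and assert the negotiated auth parameters on the first (and only) qpair, exactly as the jq probes above do. Condensed into one self-contained check for this ffdhe3072 pass:

    # rpc_cmd is the harness wrapper around scripts/rpc.py (target socket)
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]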
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.563 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.820 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:16:03.751 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.751 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:03.751 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.751 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.751 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.751 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.751 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:03.751 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.008 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:04.008 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.008 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:04.008 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:04.008 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:04.008 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.008 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.008 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.008 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.008 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.008 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.008 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.266 00:16:04.523 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.523 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.523 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.780 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.780 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.780 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.780 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.780 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.780 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.780 { 00:16:04.780 "cntlid": 21, 00:16:04.780 "qid": 0, 00:16:04.780 "state": "enabled", 00:16:04.780 "thread": "nvmf_tgt_poll_group_000", 00:16:04.780 "listen_address": { 00:16:04.780 "trtype": "TCP", 00:16:04.780 "adrfam": "IPv4", 00:16:04.780 "traddr": "10.0.0.2", 00:16:04.780 "trsvcid": "4420" 00:16:04.780 }, 00:16:04.780 "peer_address": { 00:16:04.780 "trtype": "TCP", 00:16:04.780 "adrfam": "IPv4", 00:16:04.780 "traddr": "10.0.0.1", 00:16:04.780 "trsvcid": "39582" 00:16:04.780 }, 00:16:04.780 "auth": { 00:16:04.780 "state": "completed", 00:16:04.780 "digest": "sha256", 00:16:04.780 "dhgroup": "ffdhe3072" 00:16:04.780 } 00:16:04.780 } 00:16:04.780 ]' 00:16:04.780 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.780 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.780 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.780 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:04.780 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.780 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.780 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.780 14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.037 
14:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:16:05.968 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.968 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:05.968 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.968 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.968 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.968 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.968 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:05.968 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:05.968 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:05.968 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.968 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:05.968 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:05.968 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:05.968 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.968 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:05.969 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.969 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.225 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.226 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:06.226 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:06.483 00:16:06.483 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.483 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.483 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.740 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.740 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.740 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.740 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.740 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.740 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.740 { 00:16:06.740 "cntlid": 23, 00:16:06.740 "qid": 0, 00:16:06.740 "state": "enabled", 00:16:06.740 "thread": "nvmf_tgt_poll_group_000", 00:16:06.740 "listen_address": { 00:16:06.740 "trtype": "TCP", 00:16:06.740 "adrfam": "IPv4", 00:16:06.740 "traddr": "10.0.0.2", 00:16:06.740 "trsvcid": "4420" 00:16:06.740 }, 00:16:06.740 "peer_address": { 00:16:06.740 "trtype": "TCP", 00:16:06.740 "adrfam": "IPv4", 00:16:06.740 "traddr": "10.0.0.1", 00:16:06.740 "trsvcid": "46606" 00:16:06.740 }, 00:16:06.740 "auth": { 00:16:06.740 "state": "completed", 00:16:06.740 "digest": "sha256", 00:16:06.740 "dhgroup": "ffdhe3072" 00:16:06.740 } 00:16:06.740 } 00:16:06.740 ]' 00:16:06.740 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.740 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.740 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.740 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:06.740 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.740 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.740 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.740 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.997 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:16:07.929 14:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.929 14:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:07.929 14:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.929 14:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.929 14:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.929 14:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.929 14:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.929 14:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:07.929 14:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:08.187 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:08.187 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.187 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:08.187 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:08.187 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:08.187 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.187 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.187 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.187 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.187 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.187 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.187 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.459 00:16:08.459 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.459 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.459 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.717 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.717 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.717 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.717 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.717 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.717 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.717 { 00:16:08.717 "cntlid": 25, 00:16:08.717 "qid": 0, 00:16:08.717 "state": "enabled", 00:16:08.717 "thread": "nvmf_tgt_poll_group_000", 00:16:08.717 "listen_address": { 00:16:08.717 "trtype": "TCP", 00:16:08.717 "adrfam": "IPv4", 00:16:08.717 "traddr": "10.0.0.2", 00:16:08.717 "trsvcid": "4420" 00:16:08.717 }, 00:16:08.717 "peer_address": { 00:16:08.717 "trtype": "TCP", 00:16:08.717 "adrfam": "IPv4", 00:16:08.717 "traddr": "10.0.0.1", 00:16:08.717 "trsvcid": "46640" 00:16:08.717 }, 00:16:08.717 "auth": { 00:16:08.717 "state": "completed", 00:16:08.717 "digest": "sha256", 00:16:08.717 "dhgroup": "ffdhe4096" 00:16:08.717 } 00:16:08.717 } 00:16:08.717 ]' 00:16:08.717 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.975 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.975 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.975 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:08.975 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.975 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.975 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.975 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.233 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:16:10.165 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:10.165 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:10.165 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.165 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.165 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.165 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.165 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.165 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.424 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:10.424 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.424 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:10.424 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:10.424 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:10.424 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.424 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.424 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.424 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.424 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.424 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.424 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.682 00:16:10.682 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.682 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.682 14:10:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.940 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.940 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.940 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.940 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.940 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.940 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.940 { 00:16:10.940 "cntlid": 27, 00:16:10.940 "qid": 0, 00:16:10.940 "state": "enabled", 00:16:10.940 "thread": "nvmf_tgt_poll_group_000", 00:16:10.940 "listen_address": { 00:16:10.940 "trtype": "TCP", 00:16:10.940 "adrfam": "IPv4", 00:16:10.940 "traddr": "10.0.0.2", 00:16:10.940 "trsvcid": "4420" 00:16:10.940 }, 00:16:10.940 "peer_address": { 00:16:10.940 "trtype": "TCP", 00:16:10.940 "adrfam": "IPv4", 00:16:10.940 "traddr": "10.0.0.1", 00:16:10.940 "trsvcid": "46662" 00:16:10.940 }, 00:16:10.940 "auth": { 00:16:10.940 "state": "completed", 00:16:10.940 "digest": "sha256", 00:16:10.940 "dhgroup": "ffdhe4096" 00:16:10.940 } 00:16:10.940 } 00:16:10.940 ]' 00:16:10.940 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.940 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.940 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.940 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:10.940 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.940 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.940 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.940 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.198 14:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:16:12.132 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.132 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:12.132 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:12.132 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.132 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.132 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.132 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.132 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.390 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:12.390 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.390 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:12.390 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:12.390 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:12.390 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.390 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.390 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.390 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.390 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.390 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.390 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.956 00:16:12.956 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.956 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.956 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.956 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.956 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:16:12.956 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.956 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.956 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.956 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.956 { 00:16:12.956 "cntlid": 29, 00:16:12.956 "qid": 0, 00:16:12.956 "state": "enabled", 00:16:12.956 "thread": "nvmf_tgt_poll_group_000", 00:16:12.956 "listen_address": { 00:16:12.956 "trtype": "TCP", 00:16:12.956 "adrfam": "IPv4", 00:16:12.956 "traddr": "10.0.0.2", 00:16:12.956 "trsvcid": "4420" 00:16:12.956 }, 00:16:12.956 "peer_address": { 00:16:12.956 "trtype": "TCP", 00:16:12.956 "adrfam": "IPv4", 00:16:12.956 "traddr": "10.0.0.1", 00:16:12.956 "trsvcid": "46684" 00:16:12.956 }, 00:16:12.956 "auth": { 00:16:12.956 "state": "completed", 00:16:12.956 "digest": "sha256", 00:16:12.956 "dhgroup": "ffdhe4096" 00:16:12.956 } 00:16:12.956 } 00:16:12.956 ]' 00:16:12.956 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.214 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.214 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.214 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:13.214 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.214 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.214 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.214 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.472 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:16:14.404 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.404 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:14.404 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.404 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.404 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.404 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:16:14.404 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:14.404 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:14.662 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:14.662 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.662 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:14.662 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:14.662 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:14.662 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.662 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:14.662 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.662 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.662 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.662 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.662 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.920 00:16:14.920 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.920 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.920 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.177 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.177 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.177 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.177 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.177 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.177 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:16:15.177 { 00:16:15.177 "cntlid": 31, 00:16:15.177 "qid": 0, 00:16:15.177 "state": "enabled", 00:16:15.177 "thread": "nvmf_tgt_poll_group_000", 00:16:15.177 "listen_address": { 00:16:15.177 "trtype": "TCP", 00:16:15.177 "adrfam": "IPv4", 00:16:15.177 "traddr": "10.0.0.2", 00:16:15.178 "trsvcid": "4420" 00:16:15.178 }, 00:16:15.178 "peer_address": { 00:16:15.178 "trtype": "TCP", 00:16:15.178 "adrfam": "IPv4", 00:16:15.178 "traddr": "10.0.0.1", 00:16:15.178 "trsvcid": "46710" 00:16:15.178 }, 00:16:15.178 "auth": { 00:16:15.178 "state": "completed", 00:16:15.178 "digest": "sha256", 00:16:15.178 "dhgroup": "ffdhe4096" 00:16:15.178 } 00:16:15.178 } 00:16:15.178 ]' 00:16:15.178 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.178 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.178 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.178 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.178 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.435 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.435 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.435 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.693 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.626 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.192 00:16:17.192 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.192 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.192 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.450 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.450 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.450 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.450 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.450 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.450 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.450 { 00:16:17.450 "cntlid": 33, 00:16:17.450 "qid": 0, 00:16:17.450 "state": "enabled", 00:16:17.450 "thread": "nvmf_tgt_poll_group_000", 00:16:17.450 "listen_address": { 00:16:17.450 "trtype": "TCP", 00:16:17.450 "adrfam": "IPv4", 
00:16:17.450 "traddr": "10.0.0.2", 00:16:17.450 "trsvcid": "4420" 00:16:17.450 }, 00:16:17.450 "peer_address": { 00:16:17.450 "trtype": "TCP", 00:16:17.450 "adrfam": "IPv4", 00:16:17.450 "traddr": "10.0.0.1", 00:16:17.450 "trsvcid": "53866" 00:16:17.450 }, 00:16:17.450 "auth": { 00:16:17.450 "state": "completed", 00:16:17.450 "digest": "sha256", 00:16:17.450 "dhgroup": "ffdhe6144" 00:16:17.450 } 00:16:17.450 } 00:16:17.450 ]' 00:16:17.450 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.450 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.450 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.708 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:17.708 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.708 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.708 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.708 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.965 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:16:18.898 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.898 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:18.898 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.898 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.898 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.898 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.898 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:18.898 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:19.156 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:19.156 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.156 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:19.156 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:19.156 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:19.156 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.156 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.156 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.156 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.156 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.156 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.157 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.725 00:16:19.725 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.725 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.725 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.725 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.725 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.725 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.725 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.725 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.725 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.725 { 00:16:19.725 "cntlid": 35, 00:16:19.725 "qid": 0, 00:16:19.725 "state": "enabled", 00:16:19.725 "thread": "nvmf_tgt_poll_group_000", 00:16:19.725 "listen_address": { 00:16:19.725 "trtype": "TCP", 00:16:19.725 "adrfam": "IPv4", 00:16:19.725 "traddr": "10.0.0.2", 00:16:19.725 "trsvcid": "4420" 00:16:19.725 }, 00:16:19.725 "peer_address": { 00:16:19.725 "trtype": "TCP", 00:16:19.725 "adrfam": "IPv4", 00:16:19.725 "traddr": "10.0.0.1", 00:16:19.725 "trsvcid": "53888" 00:16:19.725 }, 00:16:19.725 "auth": { 00:16:19.725 
"state": "completed", 00:16:19.725 "digest": "sha256", 00:16:19.725 "dhgroup": "ffdhe6144" 00:16:19.725 } 00:16:19.725 } 00:16:19.725 ]' 00:16:19.725 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.983 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.983 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.983 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:19.983 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.983 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.983 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.983 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.242 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:16:21.176 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.176 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:21.176 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.176 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.176 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.176 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.176 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:21.176 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:21.434 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:21.434 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.434 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:21.434 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:21.434 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key2 00:16:21.434 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.434 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.434 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.434 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.434 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.434 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.434 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.000 00:16:22.000 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.000 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.000 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.259 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.259 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.259 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.259 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.259 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.259 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.259 { 00:16:22.259 "cntlid": 37, 00:16:22.259 "qid": 0, 00:16:22.259 "state": "enabled", 00:16:22.259 "thread": "nvmf_tgt_poll_group_000", 00:16:22.259 "listen_address": { 00:16:22.259 "trtype": "TCP", 00:16:22.259 "adrfam": "IPv4", 00:16:22.259 "traddr": "10.0.0.2", 00:16:22.259 "trsvcid": "4420" 00:16:22.259 }, 00:16:22.259 "peer_address": { 00:16:22.259 "trtype": "TCP", 00:16:22.259 "adrfam": "IPv4", 00:16:22.259 "traddr": "10.0.0.1", 00:16:22.259 "trsvcid": "53918" 00:16:22.259 }, 00:16:22.259 "auth": { 00:16:22.259 "state": "completed", 00:16:22.259 "digest": "sha256", 00:16:22.259 "dhgroup": "ffdhe6144" 00:16:22.259 } 00:16:22.259 } 00:16:22.259 ]' 00:16:22.259 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.259 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:16:22.259 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.259 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:22.259 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.259 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.259 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.259 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.517 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:16:23.451 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.451 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:23.451 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.451 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.451 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.451 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.451 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:23.451 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:23.709 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:23.709 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.709 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:23.709 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:23.709 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:23.709 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.709 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:23.709 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.709 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.709 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.709 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:23.709 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:24.274 00:16:24.274 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.274 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.274 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.532 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.532 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.532 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.532 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.532 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.532 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.532 { 00:16:24.532 "cntlid": 39, 00:16:24.532 "qid": 0, 00:16:24.532 "state": "enabled", 00:16:24.532 "thread": "nvmf_tgt_poll_group_000", 00:16:24.532 "listen_address": { 00:16:24.532 "trtype": "TCP", 00:16:24.532 "adrfam": "IPv4", 00:16:24.532 "traddr": "10.0.0.2", 00:16:24.532 "trsvcid": "4420" 00:16:24.532 }, 00:16:24.532 "peer_address": { 00:16:24.532 "trtype": "TCP", 00:16:24.532 "adrfam": "IPv4", 00:16:24.532 "traddr": "10.0.0.1", 00:16:24.532 "trsvcid": "53942" 00:16:24.532 }, 00:16:24.532 "auth": { 00:16:24.532 "state": "completed", 00:16:24.532 "digest": "sha256", 00:16:24.532 "dhgroup": "ffdhe6144" 00:16:24.532 } 00:16:24.532 } 00:16:24.532 ]' 00:16:24.532 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.532 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.532 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.532 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:24.532 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.532 
14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.532 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.532 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.790 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:16:25.724 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.724 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:25.724 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.724 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.724 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.724 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.724 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.724 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:25.724 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:26.290 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:26.290 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.290 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:26.290 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:26.290 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:26.290 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.290 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.290 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.290 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.290 14:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.290 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.290 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.856 00:16:26.856 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.856 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.856 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.114 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.114 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.114 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.114 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.114 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.114 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.114 { 00:16:27.114 "cntlid": 41, 00:16:27.114 "qid": 0, 00:16:27.114 "state": "enabled", 00:16:27.114 "thread": "nvmf_tgt_poll_group_000", 00:16:27.114 "listen_address": { 00:16:27.114 "trtype": "TCP", 00:16:27.114 "adrfam": "IPv4", 00:16:27.114 "traddr": "10.0.0.2", 00:16:27.114 "trsvcid": "4420" 00:16:27.114 }, 00:16:27.114 "peer_address": { 00:16:27.114 "trtype": "TCP", 00:16:27.114 "adrfam": "IPv4", 00:16:27.114 "traddr": "10.0.0.1", 00:16:27.114 "trsvcid": "40064" 00:16:27.114 }, 00:16:27.114 "auth": { 00:16:27.114 "state": "completed", 00:16:27.114 "digest": "sha256", 00:16:27.114 "dhgroup": "ffdhe8192" 00:16:27.114 } 00:16:27.114 } 00:16:27.114 ]' 00:16:27.114 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.373 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.373 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.373 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:27.373 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.373 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.373 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.373 14:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.631 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:16:28.565 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.565 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:28.565 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.565 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.565 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.565 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.565 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:28.565 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:28.823 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:28.823 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.823 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:28.823 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:28.823 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:28.823 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.823 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.823 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.823 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.823 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.823 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.823 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.756 00:16:29.756 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.756 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.756 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.756 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.756 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.756 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.756 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.756 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.756 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.756 { 00:16:29.756 "cntlid": 43, 00:16:29.756 "qid": 0, 00:16:29.756 "state": "enabled", 00:16:29.756 "thread": "nvmf_tgt_poll_group_000", 00:16:29.756 "listen_address": { 00:16:29.756 "trtype": "TCP", 00:16:29.756 "adrfam": "IPv4", 00:16:29.756 "traddr": "10.0.0.2", 00:16:29.756 "trsvcid": "4420" 00:16:29.756 }, 00:16:29.756 "peer_address": { 00:16:29.756 "trtype": "TCP", 00:16:29.756 "adrfam": "IPv4", 00:16:29.756 "traddr": "10.0.0.1", 00:16:29.756 "trsvcid": "40084" 00:16:29.756 }, 00:16:29.756 "auth": { 00:16:29.756 "state": "completed", 00:16:29.756 "digest": "sha256", 00:16:29.756 "dhgroup": "ffdhe8192" 00:16:29.756 } 00:16:29.756 } 00:16:29.756 ]' 00:16:29.756 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.756 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.756 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.014 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:30.014 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.014 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.014 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.014 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.272 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:16:31.205 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.205 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:31.205 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.205 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.205 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.205 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.205 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:31.205 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:31.205 14:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:31.205 14:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.205 14:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:31.205 14:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:31.205 14:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:31.205 14:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.205 14:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.205 14:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.205 14:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.205 14:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.205 14:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.205 14:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.139 00:16:32.139 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.139 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.139 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.397 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.397 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.397 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.397 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.397 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.397 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.397 { 00:16:32.397 "cntlid": 45, 00:16:32.397 "qid": 0, 00:16:32.397 "state": "enabled", 00:16:32.397 "thread": "nvmf_tgt_poll_group_000", 00:16:32.397 "listen_address": { 00:16:32.397 "trtype": "TCP", 00:16:32.397 "adrfam": "IPv4", 00:16:32.397 "traddr": "10.0.0.2", 00:16:32.397 "trsvcid": "4420" 00:16:32.397 }, 00:16:32.397 "peer_address": { 00:16:32.397 "trtype": "TCP", 00:16:32.397 "adrfam": "IPv4", 00:16:32.397 "traddr": "10.0.0.1", 00:16:32.397 "trsvcid": "40108" 00:16:32.397 }, 00:16:32.397 "auth": { 00:16:32.397 "state": "completed", 00:16:32.397 "digest": "sha256", 00:16:32.397 "dhgroup": "ffdhe8192" 00:16:32.397 } 00:16:32.397 } 00:16:32.397 ]' 00:16:32.397 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.397 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.397 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.397 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:32.397 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.655 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.655 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.655 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.913 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret 
DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.847 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.781 00:16:34.781 14:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.781 14:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.781 14:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.039 14:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.039 14:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.039 14:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.039 14:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.039 14:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.039 14:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.039 { 00:16:35.039 "cntlid": 47, 00:16:35.039 "qid": 0, 00:16:35.039 "state": "enabled", 00:16:35.039 "thread": "nvmf_tgt_poll_group_000", 00:16:35.039 "listen_address": { 00:16:35.039 "trtype": "TCP", 00:16:35.039 "adrfam": "IPv4", 00:16:35.039 "traddr": "10.0.0.2", 00:16:35.039 "trsvcid": "4420" 00:16:35.039 }, 00:16:35.039 "peer_address": { 00:16:35.039 "trtype": "TCP", 00:16:35.039 "adrfam": "IPv4", 00:16:35.039 "traddr": "10.0.0.1", 00:16:35.039 "trsvcid": "40132" 00:16:35.039 }, 00:16:35.039 "auth": { 00:16:35.039 "state": "completed", 00:16:35.039 "digest": "sha256", 00:16:35.039 "dhgroup": "ffdhe8192" 00:16:35.039 } 00:16:35.039 } 00:16:35.039 ]' 00:16:35.039 14:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.039 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.039 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.039 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:35.297 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.297 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.297 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.297 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.554 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:16:36.487 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.488 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:36.488 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.488 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.488 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.488 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:36.488 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.488 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.488 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:36.488 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:36.746 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:36.746 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.746 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:36.746 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:36.746 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:36.746 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.746 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.746 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.746 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.746 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.746 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.746 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.004 00:16:37.004 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.004 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.004 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.262 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.262 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.262 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.262 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.262 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.262 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.262 { 00:16:37.262 "cntlid": 49, 00:16:37.262 "qid": 0, 00:16:37.262 "state": "enabled", 00:16:37.262 "thread": "nvmf_tgt_poll_group_000", 00:16:37.262 "listen_address": { 00:16:37.262 "trtype": "TCP", 00:16:37.262 "adrfam": "IPv4", 00:16:37.262 "traddr": "10.0.0.2", 00:16:37.262 "trsvcid": "4420" 00:16:37.262 }, 00:16:37.262 "peer_address": { 00:16:37.262 "trtype": "TCP", 00:16:37.262 "adrfam": "IPv4", 00:16:37.262 "traddr": "10.0.0.1", 00:16:37.262 "trsvcid": "39588" 00:16:37.262 }, 00:16:37.262 "auth": { 00:16:37.262 "state": "completed", 00:16:37.262 "digest": "sha384", 00:16:37.262 "dhgroup": "null" 00:16:37.262 } 00:16:37.262 } 00:16:37.262 ]' 00:16:37.262 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.262 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.262 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.262 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:37.262 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.519 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.519 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.519 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.519 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:16:38.451 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.451 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:38.451 14:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.451 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.451 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.451 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.451 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:38.451 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:38.709 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:38.709 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.709 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:38.709 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:38.709 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:38.709 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.709 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.709 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.709 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.709 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.709 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.709 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.274 00:16:39.274 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.274 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.274 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.274 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.274 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.274 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.274 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.531 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.531 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.531 { 00:16:39.531 "cntlid": 51, 00:16:39.531 "qid": 0, 00:16:39.531 "state": "enabled", 00:16:39.531 "thread": "nvmf_tgt_poll_group_000", 00:16:39.531 "listen_address": { 00:16:39.531 "trtype": "TCP", 00:16:39.531 "adrfam": "IPv4", 00:16:39.531 "traddr": "10.0.0.2", 00:16:39.531 "trsvcid": "4420" 00:16:39.531 }, 00:16:39.531 "peer_address": { 00:16:39.531 "trtype": "TCP", 00:16:39.531 "adrfam": "IPv4", 00:16:39.531 "traddr": "10.0.0.1", 00:16:39.531 "trsvcid": "39600" 00:16:39.531 }, 00:16:39.531 "auth": { 00:16:39.531 "state": "completed", 00:16:39.531 "digest": "sha384", 00:16:39.531 "dhgroup": "null" 00:16:39.531 } 00:16:39.531 } 00:16:39.531 ]' 00:16:39.531 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.531 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.531 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.531 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:39.531 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.531 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.531 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.531 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.787 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:16:40.720 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.720 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:40.720 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.720 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.720 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.720 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.720 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:40.720 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:40.977 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:40.977 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.977 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:40.977 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:40.977 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:40.977 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.977 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.977 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.977 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.977 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.977 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.977 14:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.235 00:16:41.235 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.235 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.235 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.492 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.492 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.492 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.492 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.492 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:41.492 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.492 { 00:16:41.492 "cntlid": 53, 00:16:41.492 "qid": 0, 00:16:41.492 "state": "enabled", 00:16:41.492 "thread": "nvmf_tgt_poll_group_000", 00:16:41.492 "listen_address": { 00:16:41.492 "trtype": "TCP", 00:16:41.492 "adrfam": "IPv4", 00:16:41.492 "traddr": "10.0.0.2", 00:16:41.492 "trsvcid": "4420" 00:16:41.492 }, 00:16:41.492 "peer_address": { 00:16:41.492 "trtype": "TCP", 00:16:41.492 "adrfam": "IPv4", 00:16:41.492 "traddr": "10.0.0.1", 00:16:41.492 "trsvcid": "39628" 00:16:41.492 }, 00:16:41.492 "auth": { 00:16:41.492 "state": "completed", 00:16:41.492 "digest": "sha384", 00:16:41.492 "dhgroup": "null" 00:16:41.492 } 00:16:41.492 } 00:16:41.492 ]' 00:16:41.492 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.492 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.492 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.492 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:41.492 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.492 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.492 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.492 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.750 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:16:42.683 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.683 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:42.683 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.683 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.683 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.683 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.683 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:42.683 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:42.941 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:42.941 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.941 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:42.941 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:42.941 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:42.941 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.941 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:42.941 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.941 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.941 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.941 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:42.941 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.505 00:16:43.505 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.505 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.505 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.763 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.763 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.763 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.763 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.763 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.763 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.763 { 00:16:43.763 "cntlid": 55, 00:16:43.763 "qid": 0, 00:16:43.763 "state": "enabled", 00:16:43.763 "thread": "nvmf_tgt_poll_group_000", 00:16:43.763 "listen_address": { 00:16:43.763 "trtype": "TCP", 00:16:43.763 "adrfam": "IPv4", 00:16:43.763 "traddr": "10.0.0.2", 00:16:43.763 "trsvcid": "4420" 00:16:43.763 }, 00:16:43.763 "peer_address": { 
00:16:43.763 "trtype": "TCP", 00:16:43.763 "adrfam": "IPv4", 00:16:43.763 "traddr": "10.0.0.1", 00:16:43.763 "trsvcid": "39648" 00:16:43.763 }, 00:16:43.763 "auth": { 00:16:43.763 "state": "completed", 00:16:43.763 "digest": "sha384", 00:16:43.763 "dhgroup": "null" 00:16:43.763 } 00:16:43.763 } 00:16:43.763 ]' 00:16:43.763 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.763 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.763 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.763 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:43.763 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.763 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.763 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.763 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.020 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:16:44.954 14:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.954 14:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:44.954 14:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.954 14:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.954 14:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.954 14:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.954 14:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.954 14:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:44.954 14:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:45.212 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:45.212 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.212 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:16:45.212 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:45.212 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:45.212 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.212 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.212 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.212 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.212 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.212 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.212 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.777 00:16:45.777 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.777 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.777 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.035 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.035 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.035 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.035 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.035 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.035 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.035 { 00:16:46.035 "cntlid": 57, 00:16:46.035 "qid": 0, 00:16:46.035 "state": "enabled", 00:16:46.035 "thread": "nvmf_tgt_poll_group_000", 00:16:46.035 "listen_address": { 00:16:46.035 "trtype": "TCP", 00:16:46.035 "adrfam": "IPv4", 00:16:46.035 "traddr": "10.0.0.2", 00:16:46.035 "trsvcid": "4420" 00:16:46.035 }, 00:16:46.035 "peer_address": { 00:16:46.035 "trtype": "TCP", 00:16:46.035 "adrfam": "IPv4", 00:16:46.035 "traddr": "10.0.0.1", 00:16:46.035 "trsvcid": "39684" 00:16:46.035 }, 00:16:46.035 "auth": { 00:16:46.035 "state": "completed", 00:16:46.035 "digest": "sha384", 00:16:46.035 "dhgroup": "ffdhe2048" 00:16:46.035 } 00:16:46.035 } 00:16:46.035 ]' 
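
The connect_authenticate rounds traced above (target/auth.sh@34-40) all follow one pattern per digest/dhgroup/key-id triple: register the host on the subsystem with the DHCHAP key(s), then attach a controller from the host side with the same material. A minimal bash sketch reconstructed from the xtrace fragments; it is an approximation of target/auth.sh, not its verbatim source, and hostnqn is assumed to hold the nqn.2014-08.org.nvmexpress:uuid:... value seen in the trace:

# Reconstructed from the @34-40 trace lines above (sketch, not verbatim source).
connect_authenticate() {
    local digest=$1 dhgroup=$2 key=key$3 ckey
    # The controller-key option pair is emitted only when ckeys[$3] is non-empty
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "$key" "${ckey[@]}"
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "$key" "${ckey[@]}"
}
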
00:16:46.035 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.035 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.035 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.035 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:46.035 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.035 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.035 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.035 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.293 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:16:47.250 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.250 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:47.250 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.250 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.250 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.250 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.250 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:47.250 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:47.508 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:47.508 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.508 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:47.508 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:47.508 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:47.508 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.508 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.508 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.508 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.508 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.508 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.508 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.767 00:16:47.767 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.767 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.767 14:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.025 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.025 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.025 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.025 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.283 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.283 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.283 { 00:16:48.283 "cntlid": 59, 00:16:48.283 "qid": 0, 00:16:48.283 "state": "enabled", 00:16:48.283 "thread": "nvmf_tgt_poll_group_000", 00:16:48.283 "listen_address": { 00:16:48.283 "trtype": "TCP", 00:16:48.283 "adrfam": "IPv4", 00:16:48.283 "traddr": "10.0.0.2", 00:16:48.283 "trsvcid": "4420" 00:16:48.283 }, 00:16:48.283 "peer_address": { 00:16:48.283 "trtype": "TCP", 00:16:48.283 "adrfam": "IPv4", 00:16:48.283 "traddr": "10.0.0.1", 00:16:48.283 "trsvcid": "42276" 00:16:48.283 }, 00:16:48.283 "auth": { 00:16:48.283 "state": "completed", 00:16:48.283 "digest": "sha384", 00:16:48.283 "dhgroup": "ffdhe2048" 00:16:48.283 } 00:16:48.283 } 00:16:48.283 ]' 00:16:48.283 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.283 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.283 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.283 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:48.283 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.283 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.283 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.283 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.541 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:16:49.473 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.473 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:49.473 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.473 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.473 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.473 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.473 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:49.473 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:49.731 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:49.731 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.731 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:49.731 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:49.731 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:49.731 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.731 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.731 
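
The verification block repeated after every attach (target/auth.sh@44-49) is what the jq lines above are doing: confirm the host controller actually materialized, then read the negotiated auth parameters back off the target's only qpair and compare them with the round's digest and dhgroup. Condensed into plain commands, with the xtrace and error plumbing left out; digest and dhgroup hold the current round's values, e.g. sha384 / ffdhe2048:

[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
hostrpc bdev_nvme_detach_controller nvme0
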
14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.731 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.731 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.731 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.731 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.989 00:16:49.989 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.989 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.989 14:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.247 14:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.247 14:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.247 14:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.247 14:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.247 14:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.247 14:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.247 { 00:16:50.247 "cntlid": 61, 00:16:50.247 "qid": 0, 00:16:50.247 "state": "enabled", 00:16:50.247 "thread": "nvmf_tgt_poll_group_000", 00:16:50.247 "listen_address": { 00:16:50.247 "trtype": "TCP", 00:16:50.247 "adrfam": "IPv4", 00:16:50.247 "traddr": "10.0.0.2", 00:16:50.247 "trsvcid": "4420" 00:16:50.247 }, 00:16:50.247 "peer_address": { 00:16:50.247 "trtype": "TCP", 00:16:50.247 "adrfam": "IPv4", 00:16:50.247 "traddr": "10.0.0.1", 00:16:50.247 "trsvcid": "42306" 00:16:50.247 }, 00:16:50.247 "auth": { 00:16:50.247 "state": "completed", 00:16:50.247 "digest": "sha384", 00:16:50.247 "dhgroup": "ffdhe2048" 00:16:50.247 } 00:16:50.247 } 00:16:50.247 ]' 00:16:50.247 14:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.505 14:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.505 14:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.505 14:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:50.505 14:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.505 14:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.505 14:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.505 14:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.763 14:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:16:51.697 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.697 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:51.697 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.697 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.697 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.697 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.697 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:51.697 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:51.986 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:51.986 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.986 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:51.986 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:51.986 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:51.986 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.986 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:51.986 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.986 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.986 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.986 
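
The --dhchap-secret and --dhchap-ctrl-secret strings handed to nvme connect in this section use the DHHC-1 representation from the NVMe in-band authentication spec: DHHC-1:<hmac>:<base64>:, where <hmac> is 00 for an untransformed key and 01/02/03 for SHA-256/384/512 transforms, and the base64 payload is the key material followed by a 4-byte CRC-32. Those format details come from the spec rather than this log; the variable names below are illustrative. Checking one of the trace's throwaway secrets against it:

# Expect a 52-byte payload: 48 bytes of key plus the trailing CRC-32.
secret='DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==:'
IFS=: read -r _ hmac b64 _ <<< "$secret"
printf 'hmac id %s, payload %d bytes\n' "$hmac" "$(base64 -d <<< "$b64" | wc -c)"
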
14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.986 14:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.246 00:16:52.246 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.246 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.246 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.505 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.505 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.505 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.505 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.505 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.505 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.505 { 00:16:52.505 "cntlid": 63, 00:16:52.505 "qid": 0, 00:16:52.505 "state": "enabled", 00:16:52.505 "thread": "nvmf_tgt_poll_group_000", 00:16:52.505 "listen_address": { 00:16:52.505 "trtype": "TCP", 00:16:52.505 "adrfam": "IPv4", 00:16:52.505 "traddr": "10.0.0.2", 00:16:52.505 "trsvcid": "4420" 00:16:52.505 }, 00:16:52.505 "peer_address": { 00:16:52.505 "trtype": "TCP", 00:16:52.505 "adrfam": "IPv4", 00:16:52.505 "traddr": "10.0.0.1", 00:16:52.505 "trsvcid": "42330" 00:16:52.505 }, 00:16:52.505 "auth": { 00:16:52.505 "state": "completed", 00:16:52.505 "digest": "sha384", 00:16:52.505 "dhgroup": "ffdhe2048" 00:16:52.505 } 00:16:52.505 } 00:16:52.505 ]' 00:16:52.505 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.505 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.505 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.505 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:52.505 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.505 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.505 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.505 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:52.763 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:16:53.697 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.697 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:53.697 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.697 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.697 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.697 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.697 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.697 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.697 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.955 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:53.955 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.955 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:53.955 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:53.955 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:53.955 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.955 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.955 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.955 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.955 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.955 14:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.955 14:11:01 
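
The @92 and @93 markers are the heads of the matrix loops that drive this whole section: every configured dhgroup is exercised against every key id, and bdev_nvme_set_options re-arms the host before each round. Sketched out below; the dhgroups and keys arrays are defined earlier in target/auth.sh, and sha384 stands in for the digest variable an enclosing loop presumably supplies:

for dhgroup in "${dhgroups[@]}"; do      # null ffdhe2048 ffdhe3072 ffdhe4096 ... in this trace
    for keyid in "${!keys[@]}"; do       # 0 1 2 3
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha384 "$dhgroup" "$keyid"
    done
done
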
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.520 00:16:54.520 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.520 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.520 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.520 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.520 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.520 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.520 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.520 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.520 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.520 { 00:16:54.520 "cntlid": 65, 00:16:54.520 "qid": 0, 00:16:54.520 "state": "enabled", 00:16:54.520 "thread": "nvmf_tgt_poll_group_000", 00:16:54.520 "listen_address": { 00:16:54.520 "trtype": "TCP", 00:16:54.520 "adrfam": "IPv4", 00:16:54.520 "traddr": "10.0.0.2", 00:16:54.520 "trsvcid": "4420" 00:16:54.520 }, 00:16:54.520 "peer_address": { 00:16:54.520 "trtype": "TCP", 00:16:54.520 "adrfam": "IPv4", 00:16:54.520 "traddr": "10.0.0.1", 00:16:54.520 "trsvcid": "42360" 00:16:54.520 }, 00:16:54.520 "auth": { 00:16:54.520 "state": "completed", 00:16:54.520 "digest": "sha384", 00:16:54.520 "dhgroup": "ffdhe3072" 00:16:54.520 } 00:16:54.520 } 00:16:54.520 ]' 00:16:54.520 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.777 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.777 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.777 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:54.777 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.778 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.778 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.778 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.035 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:16:55.966 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.966 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:55.966 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.966 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.966 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.966 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.966 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:55.966 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:56.223 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:56.223 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.223 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:56.223 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:56.223 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:56.223 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.223 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.223 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.223 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.223 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.223 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.223 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.481 00:16:56.481 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.481 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.481 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.738 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.738 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.738 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.738 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.738 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.738 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.738 { 00:16:56.738 "cntlid": 67, 00:16:56.738 "qid": 0, 00:16:56.738 "state": "enabled", 00:16:56.738 "thread": "nvmf_tgt_poll_group_000", 00:16:56.738 "listen_address": { 00:16:56.738 "trtype": "TCP", 00:16:56.738 "adrfam": "IPv4", 00:16:56.738 "traddr": "10.0.0.2", 00:16:56.738 "trsvcid": "4420" 00:16:56.738 }, 00:16:56.738 "peer_address": { 00:16:56.739 "trtype": "TCP", 00:16:56.739 "adrfam": "IPv4", 00:16:56.739 "traddr": "10.0.0.1", 00:16:56.739 "trsvcid": "50340" 00:16:56.739 }, 00:16:56.739 "auth": { 00:16:56.739 "state": "completed", 00:16:56.739 "digest": "sha384", 00:16:56.739 "dhgroup": "ffdhe3072" 00:16:56.739 } 00:16:56.739 } 00:16:56.739 ]' 00:16:56.739 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.739 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.739 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.739 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:56.739 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.739 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.739 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.739 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.996 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:16:57.929 14:11:05 
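
Every host-side call in this trace funnels through the hostrpc wrapper at target/auth.sh@31, which is just rpc.py aimed at the host application's socket instead of the target's default one, exactly as the expanded command lines show. Written out as a function (the function form is a reconstruction; the repo path is this run's Jenkins checkout):

hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}
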
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.929 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:57.929 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.929 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.929 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.929 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.929 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:57.929 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:58.187 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:16:58.187 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.187 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:58.187 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:58.187 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:58.187 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.187 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.187 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.187 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.187 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.187 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.187 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.444 00:16:58.444 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.445 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.445 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.702 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.702 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.702 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.702 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.960 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.960 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.960 { 00:16:58.960 "cntlid": 69, 00:16:58.960 "qid": 0, 00:16:58.960 "state": "enabled", 00:16:58.960 "thread": "nvmf_tgt_poll_group_000", 00:16:58.960 "listen_address": { 00:16:58.960 "trtype": "TCP", 00:16:58.960 "adrfam": "IPv4", 00:16:58.960 "traddr": "10.0.0.2", 00:16:58.960 "trsvcid": "4420" 00:16:58.960 }, 00:16:58.960 "peer_address": { 00:16:58.960 "trtype": "TCP", 00:16:58.960 "adrfam": "IPv4", 00:16:58.960 "traddr": "10.0.0.1", 00:16:58.960 "trsvcid": "50368" 00:16:58.960 }, 00:16:58.960 "auth": { 00:16:58.960 "state": "completed", 00:16:58.960 "digest": "sha384", 00:16:58.960 "dhgroup": "ffdhe3072" 00:16:58.960 } 00:16:58.960 } 00:16:58.960 ]' 00:16:58.960 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.960 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.960 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.960 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:58.960 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.960 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.960 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.960 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.217 14:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:17:00.149 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.149 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:00.149 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.149 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.149 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.149 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.149 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.149 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.407 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:00.407 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.407 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:00.407 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:00.407 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:00.407 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.407 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:00.407 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.407 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.407 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.407 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.407 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.973 00:17:00.973 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.973 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.973 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.973 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.973 14:11:08 
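
The xtrace_disable / set +x pairs bracketing every rpc_cmd come from common/autotest_common.sh: tracing is muted while the RPC runs against the target's default socket, and the "[[ 0 == 0 ]]" lines that follow are its return-status check. A simplified sketch only, assuming the xtrace_restore counterpart in the same file; the real helper keeps a persistent rpc.py session rather than spawning a process per call:

rpc_cmd() {
    xtrace_disable
    local rc=0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@" || rc=$?
    xtrace_restore
    [[ $rc == 0 ]]   # shows up as the "[[ 0 == 0 ]]" checks in the trace
}
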
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.973 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.973 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.973 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.973 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.973 { 00:17:00.973 "cntlid": 71, 00:17:00.973 "qid": 0, 00:17:00.973 "state": "enabled", 00:17:00.973 "thread": "nvmf_tgt_poll_group_000", 00:17:00.973 "listen_address": { 00:17:00.973 "trtype": "TCP", 00:17:00.973 "adrfam": "IPv4", 00:17:00.973 "traddr": "10.0.0.2", 00:17:00.973 "trsvcid": "4420" 00:17:00.973 }, 00:17:00.973 "peer_address": { 00:17:00.973 "trtype": "TCP", 00:17:00.973 "adrfam": "IPv4", 00:17:00.973 "traddr": "10.0.0.1", 00:17:00.973 "trsvcid": "50400" 00:17:00.973 }, 00:17:00.973 "auth": { 00:17:00.973 "state": "completed", 00:17:00.973 "digest": "sha384", 00:17:00.973 "dhgroup": "ffdhe3072" 00:17:00.973 } 00:17:00.973 } 00:17:00.973 ]' 00:17:00.973 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.231 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.231 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.231 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:01.231 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.231 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.231 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.231 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.489 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:17:02.422 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.422 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:02.422 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.422 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.422 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.422 14:11:10 
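
After each SPDK-host round, the script repeats the handshake with the kernel initiator (target/auth.sh@52-56): nvme connect gets the same key material in DHHC-1 form, and the "disconnected 1 controller(s)" lines confirm the session actually came up before the host entry is removed again. Schematically, for a key3 round like the one just traced; the secret is truncated here (the full value is in the trace), and hostnqn/hostid stand for the uuid values shown:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" --dhchap-secret 'DHHC-1:03:MTc4...HwI=:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
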
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.422 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.422 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.422 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.680 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:02.680 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.680 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:02.680 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:02.680 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:02.680 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.680 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.680 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.680 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.680 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.680 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.680 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.937 00:17:02.937 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.937 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.937 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.195 14:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.195 14:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.195 14:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.195 14:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.195 14:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.195 14:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.195 { 00:17:03.195 "cntlid": 73, 00:17:03.195 "qid": 0, 00:17:03.195 "state": "enabled", 00:17:03.195 "thread": "nvmf_tgt_poll_group_000", 00:17:03.195 "listen_address": { 00:17:03.195 "trtype": "TCP", 00:17:03.195 "adrfam": "IPv4", 00:17:03.195 "traddr": "10.0.0.2", 00:17:03.195 "trsvcid": "4420" 00:17:03.195 }, 00:17:03.195 "peer_address": { 00:17:03.195 "trtype": "TCP", 00:17:03.195 "adrfam": "IPv4", 00:17:03.195 "traddr": "10.0.0.1", 00:17:03.195 "trsvcid": "50426" 00:17:03.195 }, 00:17:03.195 "auth": { 00:17:03.195 "state": "completed", 00:17:03.195 "digest": "sha384", 00:17:03.195 "dhgroup": "ffdhe4096" 00:17:03.195 } 00:17:03.195 } 00:17:03.195 ]' 00:17:03.195 14:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.453 14:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.453 14:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.453 14:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:03.453 14:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.453 14:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.453 14:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.453 14:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.711 14:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:17:04.645 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.645 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:04.645 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.645 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.645 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.645 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.645 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:04.645 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:04.902 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:04.902 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.902 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:04.903 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:04.903 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:04.903 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.903 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.903 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.903 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.903 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.903 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.903 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.160 00:17:05.160 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.160 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.160 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.430 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.430 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.430 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.430 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.430 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.430 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:17:05.430 { 00:17:05.430 "cntlid": 75, 00:17:05.430 "qid": 0, 00:17:05.430 "state": "enabled", 00:17:05.430 "thread": "nvmf_tgt_poll_group_000", 00:17:05.430 "listen_address": { 00:17:05.430 "trtype": "TCP", 00:17:05.430 "adrfam": "IPv4", 00:17:05.430 "traddr": "10.0.0.2", 00:17:05.430 "trsvcid": "4420" 00:17:05.430 }, 00:17:05.430 "peer_address": { 00:17:05.430 "trtype": "TCP", 00:17:05.430 "adrfam": "IPv4", 00:17:05.430 "traddr": "10.0.0.1", 00:17:05.430 "trsvcid": "50456" 00:17:05.430 }, 00:17:05.430 "auth": { 00:17:05.430 "state": "completed", 00:17:05.430 "digest": "sha384", 00:17:05.430 "dhgroup": "ffdhe4096" 00:17:05.430 } 00:17:05.430 } 00:17:05.430 ]' 00:17:05.430 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.430 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.430 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.430 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:05.430 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.692 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.692 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.692 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.950 14:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:06.883 
14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.883 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.884 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.449 00:17:07.449 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.449 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.449 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.706 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.706 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.706 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.706 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.706 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.706 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.706 { 00:17:07.706 "cntlid": 77, 00:17:07.706 "qid": 0, 00:17:07.706 "state": "enabled", 00:17:07.706 "thread": "nvmf_tgt_poll_group_000", 00:17:07.706 "listen_address": { 00:17:07.706 "trtype": "TCP", 00:17:07.706 "adrfam": "IPv4", 00:17:07.706 "traddr": "10.0.0.2", 00:17:07.706 "trsvcid": "4420" 00:17:07.706 }, 00:17:07.706 "peer_address": { 
00:17:07.706 "trtype": "TCP", 00:17:07.706 "adrfam": "IPv4", 00:17:07.706 "traddr": "10.0.0.1", 00:17:07.706 "trsvcid": "55666" 00:17:07.706 }, 00:17:07.706 "auth": { 00:17:07.706 "state": "completed", 00:17:07.706 "digest": "sha384", 00:17:07.706 "dhgroup": "ffdhe4096" 00:17:07.706 } 00:17:07.706 } 00:17:07.706 ]' 00:17:07.706 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.706 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.706 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.706 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:07.706 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.706 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.706 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.706 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.964 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:17:08.897 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.897 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:08.897 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.897 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.897 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.897 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.897 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.897 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.155 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:09.155 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.155 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:17:09.155 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:09.155 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:09.155 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.155 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:09.155 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.155 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.155 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.155 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.155 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.720 00:17:09.720 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.720 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.720 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.720 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.720 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.720 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.720 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.720 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.721 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.721 { 00:17:09.721 "cntlid": 79, 00:17:09.721 "qid": 0, 00:17:09.721 "state": "enabled", 00:17:09.721 "thread": "nvmf_tgt_poll_group_000", 00:17:09.721 "listen_address": { 00:17:09.721 "trtype": "TCP", 00:17:09.721 "adrfam": "IPv4", 00:17:09.721 "traddr": "10.0.0.2", 00:17:09.721 "trsvcid": "4420" 00:17:09.721 }, 00:17:09.721 "peer_address": { 00:17:09.721 "trtype": "TCP", 00:17:09.721 "adrfam": "IPv4", 00:17:09.721 "traddr": "10.0.0.1", 00:17:09.721 "trsvcid": "55694" 00:17:09.721 }, 00:17:09.721 "auth": { 00:17:09.721 "state": "completed", 00:17:09.721 "digest": "sha384", 00:17:09.721 "dhgroup": "ffdhe4096" 00:17:09.721 } 00:17:09.721 } 00:17:09.721 ]' 00:17:09.721 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:17:09.978 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.978 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.978 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:09.978 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.978 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.978 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.978 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.263 14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:17:11.197 14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.197 14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:11.197 14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.197 14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.197 14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.197 14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.197 14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.197 14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:11.197 14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:11.197 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:11.197 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.197 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:11.197 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:11.197 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:11.197 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:17:11.197 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.197 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.197 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.197 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.197 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.197 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.763 00:17:11.763 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.763 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.763 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.021 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.021 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.021 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.021 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.021 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.021 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.021 { 00:17:12.021 "cntlid": 81, 00:17:12.021 "qid": 0, 00:17:12.021 "state": "enabled", 00:17:12.021 "thread": "nvmf_tgt_poll_group_000", 00:17:12.021 "listen_address": { 00:17:12.021 "trtype": "TCP", 00:17:12.021 "adrfam": "IPv4", 00:17:12.021 "traddr": "10.0.0.2", 00:17:12.021 "trsvcid": "4420" 00:17:12.021 }, 00:17:12.021 "peer_address": { 00:17:12.021 "trtype": "TCP", 00:17:12.021 "adrfam": "IPv4", 00:17:12.021 "traddr": "10.0.0.1", 00:17:12.021 "trsvcid": "55728" 00:17:12.021 }, 00:17:12.021 "auth": { 00:17:12.021 "state": "completed", 00:17:12.021 "digest": "sha384", 00:17:12.021 "dhgroup": "ffdhe6144" 00:17:12.021 } 00:17:12.021 } 00:17:12.021 ]' 00:17:12.021 14:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.021 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.021 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.279 14:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:12.279 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.279 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.279 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.279 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.537 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:17:13.469 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.469 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:13.469 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.469 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.469 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.469 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.469 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:13.469 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:13.726 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:13.726 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.726 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:13.726 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:13.726 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:13.726 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.726 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.726 14:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.726 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.726 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.726 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.726 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.290 00:17:14.290 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.290 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.290 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.290 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.290 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.290 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.290 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.290 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.290 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.290 { 00:17:14.290 "cntlid": 83, 00:17:14.290 "qid": 0, 00:17:14.290 "state": "enabled", 00:17:14.290 "thread": "nvmf_tgt_poll_group_000", 00:17:14.290 "listen_address": { 00:17:14.290 "trtype": "TCP", 00:17:14.290 "adrfam": "IPv4", 00:17:14.290 "traddr": "10.0.0.2", 00:17:14.290 "trsvcid": "4420" 00:17:14.290 }, 00:17:14.290 "peer_address": { 00:17:14.290 "trtype": "TCP", 00:17:14.291 "adrfam": "IPv4", 00:17:14.291 "traddr": "10.0.0.1", 00:17:14.291 "trsvcid": "55758" 00:17:14.291 }, 00:17:14.291 "auth": { 00:17:14.291 "state": "completed", 00:17:14.291 "digest": "sha384", 00:17:14.291 "dhgroup": "ffdhe6144" 00:17:14.291 } 00:17:14.291 } 00:17:14.291 ]' 00:17:14.291 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.548 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.548 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.548 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:14.548 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.548 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.548 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.548 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.805 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:17:15.757 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.757 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:15.757 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.757 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.757 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.757 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.757 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.757 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.016 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:16.016 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.016 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:16.016 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:16.016 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:16.016 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.016 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.016 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.016 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.016 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.016 14:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.016 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.581 00:17:16.581 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.581 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.581 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.581 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.581 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.581 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.581 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.839 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.839 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.839 { 00:17:16.839 "cntlid": 85, 00:17:16.839 "qid": 0, 00:17:16.839 "state": "enabled", 00:17:16.839 "thread": "nvmf_tgt_poll_group_000", 00:17:16.839 "listen_address": { 00:17:16.839 "trtype": "TCP", 00:17:16.839 "adrfam": "IPv4", 00:17:16.839 "traddr": "10.0.0.2", 00:17:16.839 "trsvcid": "4420" 00:17:16.839 }, 00:17:16.839 "peer_address": { 00:17:16.839 "trtype": "TCP", 00:17:16.839 "adrfam": "IPv4", 00:17:16.839 "traddr": "10.0.0.1", 00:17:16.839 "trsvcid": "57552" 00:17:16.839 }, 00:17:16.839 "auth": { 00:17:16.839 "state": "completed", 00:17:16.839 "digest": "sha384", 00:17:16.839 "dhgroup": "ffdhe6144" 00:17:16.839 } 00:17:16.839 } 00:17:16.839 ]' 00:17:16.839 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.839 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.839 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.839 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.839 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.839 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.839 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.839 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.097 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:17:18.028 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.028 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:18.028 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.028 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.028 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.028 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.028 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.028 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.286 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:18.286 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.286 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:18.286 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:18.286 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:18.286 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.286 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:18.286 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.286 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.286 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.286 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.286 14:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.851 00:17:18.851 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.851 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.851 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.108 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.108 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.108 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.108 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.108 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.108 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.108 { 00:17:19.108 "cntlid": 87, 00:17:19.108 "qid": 0, 00:17:19.108 "state": "enabled", 00:17:19.108 "thread": "nvmf_tgt_poll_group_000", 00:17:19.108 "listen_address": { 00:17:19.108 "trtype": "TCP", 00:17:19.108 "adrfam": "IPv4", 00:17:19.108 "traddr": "10.0.0.2", 00:17:19.108 "trsvcid": "4420" 00:17:19.108 }, 00:17:19.108 "peer_address": { 00:17:19.108 "trtype": "TCP", 00:17:19.108 "adrfam": "IPv4", 00:17:19.108 "traddr": "10.0.0.1", 00:17:19.108 "trsvcid": "57586" 00:17:19.108 }, 00:17:19.108 "auth": { 00:17:19.108 "state": "completed", 00:17:19.108 "digest": "sha384", 00:17:19.108 "dhgroup": "ffdhe6144" 00:17:19.108 } 00:17:19.108 } 00:17:19.108 ]' 00:17:19.108 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.108 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.108 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.108 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:19.108 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.108 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.108 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.108 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.366 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:17:20.301 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.301 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:20.301 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.301 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.301 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.301 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.301 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.301 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:20.301 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:20.559 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:20.559 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.559 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:20.559 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:20.559 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:20.559 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.559 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.559 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.559 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.559 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.559 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.559 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.491 00:17:21.491 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.491 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.491 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.749 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.749 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.749 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.749 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.749 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.749 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.749 { 00:17:21.749 "cntlid": 89, 00:17:21.749 "qid": 0, 00:17:21.749 "state": "enabled", 00:17:21.749 "thread": "nvmf_tgt_poll_group_000", 00:17:21.749 "listen_address": { 00:17:21.749 "trtype": "TCP", 00:17:21.749 "adrfam": "IPv4", 00:17:21.749 "traddr": "10.0.0.2", 00:17:21.749 "trsvcid": "4420" 00:17:21.749 }, 00:17:21.749 "peer_address": { 00:17:21.749 "trtype": "TCP", 00:17:21.749 "adrfam": "IPv4", 00:17:21.749 "traddr": "10.0.0.1", 00:17:21.749 "trsvcid": "57604" 00:17:21.749 }, 00:17:21.749 "auth": { 00:17:21.749 "state": "completed", 00:17:21.749 "digest": "sha384", 00:17:21.749 "dhgroup": "ffdhe8192" 00:17:21.749 } 00:17:21.749 } 00:17:21.749 ]' 00:17:21.749 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.749 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.749 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.750 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.750 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.750 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.750 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.750 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.008 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:17:22.941 14:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.941 14:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:22.941 14:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.941 14:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.941 14:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.941 14:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.941 14:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.941 14:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.199 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:23.199 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.199 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:23.199 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:23.199 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:23.199 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.199 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.199 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.199 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.199 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.199 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.199 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.151 00:17:24.151 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.151 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.151 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.151 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.151 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.151 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.151 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.151 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.151 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.151 { 00:17:24.151 "cntlid": 91, 00:17:24.151 "qid": 0, 00:17:24.151 "state": "enabled", 00:17:24.151 "thread": "nvmf_tgt_poll_group_000", 00:17:24.151 "listen_address": { 00:17:24.151 "trtype": "TCP", 00:17:24.151 "adrfam": "IPv4", 00:17:24.151 "traddr": "10.0.0.2", 00:17:24.151 "trsvcid": "4420" 00:17:24.151 }, 00:17:24.151 "peer_address": { 00:17:24.151 "trtype": "TCP", 00:17:24.151 "adrfam": "IPv4", 00:17:24.151 "traddr": "10.0.0.1", 00:17:24.151 "trsvcid": "57640" 00:17:24.151 }, 00:17:24.151 "auth": { 00:17:24.151 "state": "completed", 00:17:24.151 "digest": "sha384", 00:17:24.151 "dhgroup": "ffdhe8192" 00:17:24.151 } 00:17:24.151 } 00:17:24.151 ]' 00:17:24.151 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.409 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.409 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.409 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.409 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.409 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.409 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.409 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.676 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.613 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.614 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.614 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.614 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.614 14:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.713 00:17:26.713 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.713 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.713 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.713 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:26.713 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.713 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.713 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.713 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.713 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.713 { 00:17:26.713 "cntlid": 93, 00:17:26.713 "qid": 0, 00:17:26.713 "state": "enabled", 00:17:26.713 "thread": "nvmf_tgt_poll_group_000", 00:17:26.713 "listen_address": { 00:17:26.713 "trtype": "TCP", 00:17:26.713 "adrfam": "IPv4", 00:17:26.713 "traddr": "10.0.0.2", 00:17:26.713 "trsvcid": "4420" 00:17:26.713 }, 00:17:26.713 "peer_address": { 00:17:26.713 "trtype": "TCP", 00:17:26.713 "adrfam": "IPv4", 00:17:26.713 "traddr": "10.0.0.1", 00:17:26.713 "trsvcid": "36610" 00:17:26.713 }, 00:17:26.713 "auth": { 00:17:26.713 "state": "completed", 00:17:26.713 "digest": "sha384", 00:17:26.713 "dhgroup": "ffdhe8192" 00:17:26.713 } 00:17:26.713 } 00:17:26.713 ]' 00:17:26.713 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.992 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.992 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.992 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.992 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.992 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.992 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.992 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.272 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:17:28.255 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.255 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:28.255 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.255 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.255 14:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.255 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.255 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:28.255 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:28.255 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:28.255 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.255 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:28.255 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:28.255 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:28.255 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.256 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:28.256 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.256 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.256 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.256 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:28.256 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.233 00:17:29.233 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.233 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.233 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.490 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.490 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.490 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.490 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:17:29.490 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.490 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.490 { 00:17:29.490 "cntlid": 95, 00:17:29.490 "qid": 0, 00:17:29.490 "state": "enabled", 00:17:29.490 "thread": "nvmf_tgt_poll_group_000", 00:17:29.490 "listen_address": { 00:17:29.490 "trtype": "TCP", 00:17:29.490 "adrfam": "IPv4", 00:17:29.490 "traddr": "10.0.0.2", 00:17:29.490 "trsvcid": "4420" 00:17:29.490 }, 00:17:29.490 "peer_address": { 00:17:29.490 "trtype": "TCP", 00:17:29.490 "adrfam": "IPv4", 00:17:29.490 "traddr": "10.0.0.1", 00:17:29.490 "trsvcid": "36650" 00:17:29.490 }, 00:17:29.490 "auth": { 00:17:29.490 "state": "completed", 00:17:29.490 "digest": "sha384", 00:17:29.490 "dhgroup": "ffdhe8192" 00:17:29.490 } 00:17:29.490 } 00:17:29.490 ]' 00:17:29.491 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.491 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.491 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.491 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:29.491 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.491 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.491 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.491 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.748 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:17:30.682 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.682 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:30.682 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.682 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.682 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.682 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:30.682 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.682 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.682 14:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:30.682 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:30.940 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:30.940 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.940 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:30.940 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:30.940 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:30.940 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.940 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.940 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.940 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.940 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.940 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.940 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.506 00:17:31.506 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.506 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.506 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.764 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.764 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.764 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.764 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.764 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.764 14:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.764 { 00:17:31.764 "cntlid": 97, 00:17:31.764 "qid": 0, 00:17:31.764 "state": "enabled", 00:17:31.764 "thread": "nvmf_tgt_poll_group_000", 00:17:31.764 "listen_address": { 00:17:31.764 "trtype": "TCP", 00:17:31.764 "adrfam": "IPv4", 00:17:31.764 "traddr": "10.0.0.2", 00:17:31.764 "trsvcid": "4420" 00:17:31.764 }, 00:17:31.764 "peer_address": { 00:17:31.764 "trtype": "TCP", 00:17:31.764 "adrfam": "IPv4", 00:17:31.764 "traddr": "10.0.0.1", 00:17:31.764 "trsvcid": "36684" 00:17:31.764 }, 00:17:31.764 "auth": { 00:17:31.764 "state": "completed", 00:17:31.764 "digest": "sha512", 00:17:31.764 "dhgroup": "null" 00:17:31.764 } 00:17:31.764 } 00:17:31.764 ]' 00:17:31.764 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.764 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.764 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.764 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:31.764 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.764 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.764 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.764 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.023 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:17:32.956 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.956 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:32.956 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.956 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.956 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.956 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.956 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:32.956 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.215 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:33.215 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.215 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:33.215 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:33.215 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:33.215 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.215 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.215 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.215 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.215 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.215 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.215 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.781 00:17:33.781 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.781 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.781 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.039 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.039 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.039 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.039 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.039 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.039 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.039 { 00:17:34.039 "cntlid": 99, 00:17:34.039 "qid": 0, 00:17:34.039 "state": "enabled", 00:17:34.039 "thread": "nvmf_tgt_poll_group_000", 00:17:34.039 "listen_address": { 00:17:34.039 "trtype": "TCP", 00:17:34.039 "adrfam": "IPv4", 00:17:34.039 
"traddr": "10.0.0.2", 00:17:34.039 "trsvcid": "4420" 00:17:34.039 }, 00:17:34.039 "peer_address": { 00:17:34.039 "trtype": "TCP", 00:17:34.039 "adrfam": "IPv4", 00:17:34.039 "traddr": "10.0.0.1", 00:17:34.039 "trsvcid": "36708" 00:17:34.039 }, 00:17:34.039 "auth": { 00:17:34.039 "state": "completed", 00:17:34.039 "digest": "sha512", 00:17:34.039 "dhgroup": "null" 00:17:34.039 } 00:17:34.039 } 00:17:34.039 ]' 00:17:34.039 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.039 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.039 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.039 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:34.039 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.039 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.039 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.039 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.297 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:17:35.245 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.245 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:35.245 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.245 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.245 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.245 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.245 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:35.245 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:35.504 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:35.504 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.504 14:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:35.504 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:35.504 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:35.504 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.504 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.504 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.504 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.504 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.504 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.504 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.761 00:17:35.761 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.761 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.761 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.019 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.019 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.019 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.019 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.019 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.019 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.019 { 00:17:36.019 "cntlid": 101, 00:17:36.019 "qid": 0, 00:17:36.019 "state": "enabled", 00:17:36.019 "thread": "nvmf_tgt_poll_group_000", 00:17:36.019 "listen_address": { 00:17:36.019 "trtype": "TCP", 00:17:36.019 "adrfam": "IPv4", 00:17:36.019 "traddr": "10.0.0.2", 00:17:36.019 "trsvcid": "4420" 00:17:36.019 }, 00:17:36.019 "peer_address": { 00:17:36.019 "trtype": "TCP", 00:17:36.019 "adrfam": "IPv4", 00:17:36.019 "traddr": "10.0.0.1", 00:17:36.019 "trsvcid": "38766" 00:17:36.019 }, 00:17:36.019 "auth": { 00:17:36.019 "state": "completed", 00:17:36.019 "digest": "sha512", 00:17:36.019 "dhgroup": "null" 
00:17:36.019 } 00:17:36.019 } 00:17:36.019 ]' 00:17:36.019 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.019 14:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.019 14:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.277 14:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:36.277 14:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.277 14:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.277 14:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.277 14:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.549 14:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:17:37.488 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.488 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:37.488 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.489 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.489 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.489 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.489 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:37.489 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:37.746 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:37.746 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.746 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:37.747 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:37.747 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:37.747 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.747 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:37.747 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.747 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.747 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.747 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.747 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.005 00:17:38.005 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.005 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.005 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.263 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.263 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.263 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.263 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.263 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.263 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.263 { 00:17:38.263 "cntlid": 103, 00:17:38.263 "qid": 0, 00:17:38.263 "state": "enabled", 00:17:38.263 "thread": "nvmf_tgt_poll_group_000", 00:17:38.263 "listen_address": { 00:17:38.263 "trtype": "TCP", 00:17:38.263 "adrfam": "IPv4", 00:17:38.263 "traddr": "10.0.0.2", 00:17:38.263 "trsvcid": "4420" 00:17:38.263 }, 00:17:38.263 "peer_address": { 00:17:38.263 "trtype": "TCP", 00:17:38.263 "adrfam": "IPv4", 00:17:38.263 "traddr": "10.0.0.1", 00:17:38.263 "trsvcid": "38778" 00:17:38.263 }, 00:17:38.263 "auth": { 00:17:38.263 "state": "completed", 00:17:38.263 "digest": "sha512", 00:17:38.263 "dhgroup": "null" 00:17:38.263 } 00:17:38.263 } 00:17:38.263 ]' 00:17:38.263 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.263 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.263 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.263 14:11:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:38.263 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.263 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.263 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.263 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.521 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:17:39.462 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.462 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:39.462 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.462 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.462 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.462 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.462 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.462 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.462 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.720 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:39.720 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.720 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.720 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:39.720 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:39.720 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.720 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.720 14:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.720 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.978 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.978 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.978 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.236 00:17:40.236 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.236 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.236 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.494 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.494 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.494 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.494 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.494 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.494 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.494 { 00:17:40.494 "cntlid": 105, 00:17:40.494 "qid": 0, 00:17:40.494 "state": "enabled", 00:17:40.494 "thread": "nvmf_tgt_poll_group_000", 00:17:40.494 "listen_address": { 00:17:40.494 "trtype": "TCP", 00:17:40.494 "adrfam": "IPv4", 00:17:40.494 "traddr": "10.0.0.2", 00:17:40.494 "trsvcid": "4420" 00:17:40.494 }, 00:17:40.494 "peer_address": { 00:17:40.494 "trtype": "TCP", 00:17:40.494 "adrfam": "IPv4", 00:17:40.494 "traddr": "10.0.0.1", 00:17:40.494 "trsvcid": "38812" 00:17:40.494 }, 00:17:40.494 "auth": { 00:17:40.494 "state": "completed", 00:17:40.494 "digest": "sha512", 00:17:40.494 "dhgroup": "ffdhe2048" 00:17:40.494 } 00:17:40.494 } 00:17:40.494 ]' 00:17:40.494 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.494 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.494 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.494 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:40.494 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.494 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.494 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.494 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.752 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:17:41.687 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.687 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:41.687 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.687 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.687 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.687 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.687 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:41.687 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:41.946 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:41.946 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.946 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:41.946 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:41.946 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:41.946 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.946 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.946 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.946 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.946 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:17:41.946 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.946 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.204 00:17:42.204 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.204 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.204 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.462 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.462 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.462 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.462 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.462 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.462 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.462 { 00:17:42.462 "cntlid": 107, 00:17:42.462 "qid": 0, 00:17:42.462 "state": "enabled", 00:17:42.462 "thread": "nvmf_tgt_poll_group_000", 00:17:42.462 "listen_address": { 00:17:42.462 "trtype": "TCP", 00:17:42.462 "adrfam": "IPv4", 00:17:42.462 "traddr": "10.0.0.2", 00:17:42.462 "trsvcid": "4420" 00:17:42.462 }, 00:17:42.462 "peer_address": { 00:17:42.462 "trtype": "TCP", 00:17:42.462 "adrfam": "IPv4", 00:17:42.462 "traddr": "10.0.0.1", 00:17:42.462 "trsvcid": "38854" 00:17:42.462 }, 00:17:42.462 "auth": { 00:17:42.462 "state": "completed", 00:17:42.462 "digest": "sha512", 00:17:42.462 "dhgroup": "ffdhe2048" 00:17:42.462 } 00:17:42.462 } 00:17:42.462 ]' 00:17:42.462 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.462 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.462 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.720 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:42.720 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.720 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.720 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.720 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.978 14:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:17:43.911 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.911 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:43.911 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.911 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.911 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.911 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.911 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:43.912 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:43.912 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:43.912 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.912 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.912 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:43.912 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:43.912 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.912 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.912 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.912 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.912 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.912 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:43.912 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.478 00:17:44.478 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.478 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.478 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.736 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.736 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.736 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.736 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.736 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.736 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.736 { 00:17:44.736 "cntlid": 109, 00:17:44.736 "qid": 0, 00:17:44.736 "state": "enabled", 00:17:44.736 "thread": "nvmf_tgt_poll_group_000", 00:17:44.736 "listen_address": { 00:17:44.736 "trtype": "TCP", 00:17:44.736 "adrfam": "IPv4", 00:17:44.736 "traddr": "10.0.0.2", 00:17:44.736 "trsvcid": "4420" 00:17:44.736 }, 00:17:44.736 "peer_address": { 00:17:44.736 "trtype": "TCP", 00:17:44.736 "adrfam": "IPv4", 00:17:44.736 "traddr": "10.0.0.1", 00:17:44.736 "trsvcid": "38888" 00:17:44.736 }, 00:17:44.736 "auth": { 00:17:44.736 "state": "completed", 00:17:44.736 "digest": "sha512", 00:17:44.736 "dhgroup": "ffdhe2048" 00:17:44.736 } 00:17:44.736 } 00:17:44.736 ]' 00:17:44.736 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.736 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.736 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.736 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:44.736 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.736 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.736 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.736 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.995 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:17:45.927 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.927 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:45.927 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.927 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.927 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.927 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.927 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.927 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.185 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:46.185 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.185 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:46.185 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:46.185 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:46.185 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.185 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:46.185 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.185 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.185 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.185 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.185 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.767 00:17:46.767 14:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.767 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.767 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.767 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.767 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.767 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.767 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.767 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.767 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.767 { 00:17:46.767 "cntlid": 111, 00:17:46.767 "qid": 0, 00:17:46.767 "state": "enabled", 00:17:46.767 "thread": "nvmf_tgt_poll_group_000", 00:17:46.767 "listen_address": { 00:17:46.767 "trtype": "TCP", 00:17:46.767 "adrfam": "IPv4", 00:17:46.767 "traddr": "10.0.0.2", 00:17:46.767 "trsvcid": "4420" 00:17:46.767 }, 00:17:46.767 "peer_address": { 00:17:46.767 "trtype": "TCP", 00:17:46.767 "adrfam": "IPv4", 00:17:46.767 "traddr": "10.0.0.1", 00:17:46.767 "trsvcid": "47866" 00:17:46.767 }, 00:17:46.767 "auth": { 00:17:46.767 "state": "completed", 00:17:46.767 "digest": "sha512", 00:17:46.767 "dhgroup": "ffdhe2048" 00:17:46.767 } 00:17:46.767 } 00:17:46.767 ]' 00:17:46.767 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.025 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.025 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.025 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:47.025 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.025 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.025 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.025 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.283 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:17:48.217 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.217 14:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:48.217 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.217 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.217 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.217 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.217 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.217 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.217 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.475 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:48.475 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.475 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.475 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:48.475 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:48.475 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.475 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.475 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.475 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.475 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.475 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.475 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.733 00:17:48.733 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.733 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.733 14:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.991 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.991 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.991 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.991 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.991 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.991 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.991 { 00:17:48.991 "cntlid": 113, 00:17:48.991 "qid": 0, 00:17:48.991 "state": "enabled", 00:17:48.991 "thread": "nvmf_tgt_poll_group_000", 00:17:48.991 "listen_address": { 00:17:48.991 "trtype": "TCP", 00:17:48.991 "adrfam": "IPv4", 00:17:48.991 "traddr": "10.0.0.2", 00:17:48.991 "trsvcid": "4420" 00:17:48.991 }, 00:17:48.991 "peer_address": { 00:17:48.991 "trtype": "TCP", 00:17:48.991 "adrfam": "IPv4", 00:17:48.991 "traddr": "10.0.0.1", 00:17:48.991 "trsvcid": "47896" 00:17:48.991 }, 00:17:48.991 "auth": { 00:17:48.991 "state": "completed", 00:17:48.991 "digest": "sha512", 00:17:48.991 "dhgroup": "ffdhe3072" 00:17:48.991 } 00:17:48.991 } 00:17:48.991 ]' 00:17:48.991 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.991 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.991 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.250 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:49.250 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.250 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.250 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.250 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.507 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:17:50.439 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.439 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:50.439 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.439 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.439 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.439 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.439 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:50.439 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:50.695 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:50.695 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.695 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.695 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:50.695 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:50.695 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.695 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.695 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.695 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.695 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.695 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.695 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.953 00:17:50.953 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.953 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.953 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.209 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:51.209 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.209 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.210 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.210 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.210 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.210 { 00:17:51.210 "cntlid": 115, 00:17:51.210 "qid": 0, 00:17:51.210 "state": "enabled", 00:17:51.210 "thread": "nvmf_tgt_poll_group_000", 00:17:51.210 "listen_address": { 00:17:51.210 "trtype": "TCP", 00:17:51.210 "adrfam": "IPv4", 00:17:51.210 "traddr": "10.0.0.2", 00:17:51.210 "trsvcid": "4420" 00:17:51.210 }, 00:17:51.210 "peer_address": { 00:17:51.210 "trtype": "TCP", 00:17:51.210 "adrfam": "IPv4", 00:17:51.210 "traddr": "10.0.0.1", 00:17:51.210 "trsvcid": "47930" 00:17:51.210 }, 00:17:51.210 "auth": { 00:17:51.210 "state": "completed", 00:17:51.210 "digest": "sha512", 00:17:51.210 "dhgroup": "ffdhe3072" 00:17:51.210 } 00:17:51.210 } 00:17:51.210 ]' 00:17:51.210 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.210 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.210 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.210 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.210 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.210 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.210 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.210 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.467 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:17:52.401 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.401 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:52.401 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.401 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.401 14:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.401 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.401 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.401 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.659 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:52.659 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.659 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:52.659 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:52.659 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:52.659 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.659 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.659 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.659 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.659 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.659 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.659 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.224 00:17:53.224 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.224 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.224 14:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.481 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.481 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.481 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.481 14:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.481 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.481 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.481 { 00:17:53.481 "cntlid": 117, 00:17:53.482 "qid": 0, 00:17:53.482 "state": "enabled", 00:17:53.482 "thread": "nvmf_tgt_poll_group_000", 00:17:53.482 "listen_address": { 00:17:53.482 "trtype": "TCP", 00:17:53.482 "adrfam": "IPv4", 00:17:53.482 "traddr": "10.0.0.2", 00:17:53.482 "trsvcid": "4420" 00:17:53.482 }, 00:17:53.482 "peer_address": { 00:17:53.482 "trtype": "TCP", 00:17:53.482 "adrfam": "IPv4", 00:17:53.482 "traddr": "10.0.0.1", 00:17:53.482 "trsvcid": "47960" 00:17:53.482 }, 00:17:53.482 "auth": { 00:17:53.482 "state": "completed", 00:17:53.482 "digest": "sha512", 00:17:53.482 "dhgroup": "ffdhe3072" 00:17:53.482 } 00:17:53.482 } 00:17:53.482 ]' 00:17:53.482 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.482 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.482 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.482 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:53.482 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.482 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.482 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.482 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.740 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:17:54.681 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.681 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:54.681 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.682 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.682 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.682 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.682 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:17:54.682 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.939 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:54.939 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.939 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.939 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:54.939 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:54.939 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.939 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:54.939 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.939 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.939 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.939 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.939 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.197 00:17:55.197 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.197 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.197 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.454 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.454 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.454 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.454 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.454 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.454 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.454 { 00:17:55.454 "cntlid": 119, 00:17:55.454 "qid": 0, 00:17:55.454 "state": "enabled", 00:17:55.454 "thread": 
"nvmf_tgt_poll_group_000", 00:17:55.454 "listen_address": { 00:17:55.454 "trtype": "TCP", 00:17:55.454 "adrfam": "IPv4", 00:17:55.454 "traddr": "10.0.0.2", 00:17:55.454 "trsvcid": "4420" 00:17:55.454 }, 00:17:55.454 "peer_address": { 00:17:55.454 "trtype": "TCP", 00:17:55.454 "adrfam": "IPv4", 00:17:55.454 "traddr": "10.0.0.1", 00:17:55.454 "trsvcid": "47992" 00:17:55.454 }, 00:17:55.454 "auth": { 00:17:55.454 "state": "completed", 00:17:55.454 "digest": "sha512", 00:17:55.454 "dhgroup": "ffdhe3072" 00:17:55.454 } 00:17:55.454 } 00:17:55.454 ]' 00:17:55.454 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.454 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.454 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.454 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:55.454 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.454 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.454 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.454 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.712 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:17:56.645 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.645 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:56.645 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.645 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.645 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.645 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.645 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.645 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:56.645 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:56.903 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:56.903 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.903 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.903 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:56.903 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:56.903 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.903 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.903 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.903 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.903 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.903 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.903 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.469 00:17:57.469 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.469 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.469 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.727 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.727 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.727 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.727 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.727 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.727 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.727 { 00:17:57.727 "cntlid": 121, 00:17:57.727 "qid": 0, 00:17:57.727 "state": "enabled", 00:17:57.727 "thread": "nvmf_tgt_poll_group_000", 00:17:57.727 "listen_address": { 00:17:57.727 "trtype": "TCP", 00:17:57.727 "adrfam": "IPv4", 00:17:57.727 "traddr": "10.0.0.2", 00:17:57.727 "trsvcid": "4420" 00:17:57.727 }, 00:17:57.727 "peer_address": { 00:17:57.727 "trtype": "TCP", 00:17:57.727 "adrfam": 
"IPv4", 00:17:57.727 "traddr": "10.0.0.1", 00:17:57.727 "trsvcid": "36210" 00:17:57.727 }, 00:17:57.727 "auth": { 00:17:57.727 "state": "completed", 00:17:57.727 "digest": "sha512", 00:17:57.727 "dhgroup": "ffdhe4096" 00:17:57.727 } 00:17:57.727 } 00:17:57.727 ]' 00:17:57.727 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.727 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.727 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.727 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.727 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.727 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.727 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.727 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.985 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:17:58.920 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.920 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:58.920 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.920 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.920 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.920 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.920 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.920 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.178 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:59.178 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.178 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.178 
14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:59.178 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:59.178 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.178 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.178 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.178 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.178 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.178 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.178 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.744 00:17:59.744 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.744 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.744 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.744 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.744 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.744 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.744 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.002 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.002 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.002 { 00:18:00.002 "cntlid": 123, 00:18:00.002 "qid": 0, 00:18:00.002 "state": "enabled", 00:18:00.002 "thread": "nvmf_tgt_poll_group_000", 00:18:00.002 "listen_address": { 00:18:00.002 "trtype": "TCP", 00:18:00.002 "adrfam": "IPv4", 00:18:00.002 "traddr": "10.0.0.2", 00:18:00.002 "trsvcid": "4420" 00:18:00.002 }, 00:18:00.002 "peer_address": { 00:18:00.002 "trtype": "TCP", 00:18:00.002 "adrfam": "IPv4", 00:18:00.002 "traddr": "10.0.0.1", 00:18:00.002 "trsvcid": "36242" 00:18:00.002 }, 00:18:00.002 "auth": { 00:18:00.002 "state": "completed", 00:18:00.002 "digest": "sha512", 00:18:00.002 "dhgroup": "ffdhe4096" 00:18:00.002 } 00:18:00.002 } 00:18:00.002 ]' 00:18:00.002 14:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.002 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.002 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.002 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.002 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.002 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.002 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.002 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.260 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:18:01.193 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.193 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:01.193 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.193 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.193 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.193 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.193 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.193 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.483 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:01.483 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.483 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:01.483 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:01.483 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:01.483 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:01.483 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.483 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.483 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.483 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.483 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.483 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.786 00:18:01.786 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.786 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.786 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.065 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.065 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.065 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.065 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.065 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.065 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.065 { 00:18:02.065 "cntlid": 125, 00:18:02.065 "qid": 0, 00:18:02.065 "state": "enabled", 00:18:02.065 "thread": "nvmf_tgt_poll_group_000", 00:18:02.065 "listen_address": { 00:18:02.065 "trtype": "TCP", 00:18:02.065 "adrfam": "IPv4", 00:18:02.065 "traddr": "10.0.0.2", 00:18:02.065 "trsvcid": "4420" 00:18:02.065 }, 00:18:02.065 "peer_address": { 00:18:02.065 "trtype": "TCP", 00:18:02.065 "adrfam": "IPv4", 00:18:02.065 "traddr": "10.0.0.1", 00:18:02.065 "trsvcid": "36276" 00:18:02.065 }, 00:18:02.065 "auth": { 00:18:02.065 "state": "completed", 00:18:02.065 "digest": "sha512", 00:18:02.065 "dhgroup": "ffdhe4096" 00:18:02.065 } 00:18:02.065 } 00:18:02.065 ]' 00:18:02.065 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.065 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.065 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.065 
14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.065 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.065 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.065 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.065 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.325 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:18:03.257 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.257 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:03.257 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.257 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.257 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.257 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.257 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.257 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.515 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:03.515 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.515 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.515 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:03.515 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:03.515 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.515 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:03.515 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:03.515 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.515 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.515 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.515 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.773 00:18:03.773 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.773 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.773 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.031 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.031 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.031 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.031 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.031 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.031 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.031 { 00:18:04.031 "cntlid": 127, 00:18:04.031 "qid": 0, 00:18:04.031 "state": "enabled", 00:18:04.031 "thread": "nvmf_tgt_poll_group_000", 00:18:04.031 "listen_address": { 00:18:04.031 "trtype": "TCP", 00:18:04.031 "adrfam": "IPv4", 00:18:04.031 "traddr": "10.0.0.2", 00:18:04.031 "trsvcid": "4420" 00:18:04.031 }, 00:18:04.031 "peer_address": { 00:18:04.031 "trtype": "TCP", 00:18:04.031 "adrfam": "IPv4", 00:18:04.031 "traddr": "10.0.0.1", 00:18:04.031 "trsvcid": "36296" 00:18:04.031 }, 00:18:04.031 "auth": { 00:18:04.031 "state": "completed", 00:18:04.031 "digest": "sha512", 00:18:04.031 "dhgroup": "ffdhe4096" 00:18:04.031 } 00:18:04.031 } 00:18:04.031 ]' 00:18:04.031 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.288 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.288 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.288 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:04.288 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.288 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.288 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.288 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.546 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:18:05.479 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.479 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:05.479 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.479 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.479 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.479 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.479 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.479 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.479 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.736 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:05.736 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.736 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.736 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.736 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:05.736 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.736 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.736 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.736 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.736 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.736 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.737 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.301 00:18:06.301 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.301 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.301 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.558 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.558 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.558 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.558 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.558 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.558 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.558 { 00:18:06.558 "cntlid": 129, 00:18:06.558 "qid": 0, 00:18:06.558 "state": "enabled", 00:18:06.558 "thread": "nvmf_tgt_poll_group_000", 00:18:06.558 "listen_address": { 00:18:06.558 "trtype": "TCP", 00:18:06.558 "adrfam": "IPv4", 00:18:06.558 "traddr": "10.0.0.2", 00:18:06.558 "trsvcid": "4420" 00:18:06.558 }, 00:18:06.558 "peer_address": { 00:18:06.558 "trtype": "TCP", 00:18:06.558 "adrfam": "IPv4", 00:18:06.558 "traddr": "10.0.0.1", 00:18:06.558 "trsvcid": "41012" 00:18:06.558 }, 00:18:06.558 "auth": { 00:18:06.558 "state": "completed", 00:18:06.558 "digest": "sha512", 00:18:06.558 "dhgroup": "ffdhe6144" 00:18:06.558 } 00:18:06.558 } 00:18:06.558 ]' 00:18:06.558 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.558 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.558 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.558 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.558 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.558 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.558 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.558 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.123 
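For anyone replaying one of these rounds by hand, the sequence the harness drives per digest/dhgroup/key combination is compact. Below is a minimal sketch of the sha512/ffdhe6144/key0 round just completed, assuming an SPDK target already listening on 10.0.0.2:4420 with subsystem nqn.2024-03.io.spdk:cnode0 (target RPC on the default /var/tmp/spdk.sock), a second bdev_nvme host instance answering on /var/tmp/host.sock, and keyring entries key0/ckey0 registered earlier in the run (not shown in this excerpt):

#!/usr/bin/env bash
set -e
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# Host side: restrict negotiation to the single digest/dhgroup pair under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Target side: admit the host NQN with this round's keys.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach: DH-HMAC-CHAP (bidirectional, since ckey0 is set) runs here.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Confirm the controller exists, then detach so the next round starts clean.
"$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'  # expect nvme0
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The detach before each new round matters: it forces a fresh TCP connection, so every digest/dhgroup/key combination exercises a full authentication handshake rather than reusing an already-authenticated queue.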
14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:18:07.687 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.945 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:07.945 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.945 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.945 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.945 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.945 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.945 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.203 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:08.203 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.203 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:08.203 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:08.203 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:08.203 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.203 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.203 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.203 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.203 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.203 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.203 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.768 00:18:08.768 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.768 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.768 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.026 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.026 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.026 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.026 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.026 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.026 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.026 { 00:18:09.026 "cntlid": 131, 00:18:09.026 "qid": 0, 00:18:09.026 "state": "enabled", 00:18:09.026 "thread": "nvmf_tgt_poll_group_000", 00:18:09.026 "listen_address": { 00:18:09.026 "trtype": "TCP", 00:18:09.026 "adrfam": "IPv4", 00:18:09.026 "traddr": "10.0.0.2", 00:18:09.026 "trsvcid": "4420" 00:18:09.026 }, 00:18:09.026 "peer_address": { 00:18:09.026 "trtype": "TCP", 00:18:09.026 "adrfam": "IPv4", 00:18:09.026 "traddr": "10.0.0.1", 00:18:09.026 "trsvcid": "41056" 00:18:09.026 }, 00:18:09.026 "auth": { 00:18:09.026 "state": "completed", 00:18:09.026 "digest": "sha512", 00:18:09.026 "dhgroup": "ffdhe6144" 00:18:09.026 } 00:18:09.026 } 00:18:09.026 ]' 00:18:09.026 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.026 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.026 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.026 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:09.026 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.026 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.026 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.026 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.282 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:18:10.215 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.215 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:10.215 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.215 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.215 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.215 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.215 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:10.215 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:10.473 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:10.473 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.473 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:10.473 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:10.473 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:10.473 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.473 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.473 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.473 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.473 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.473 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.473 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.038 
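Each round also cross-checks the same credentials with the Linux kernel initiator through nvme-cli, as in the connect lines above. A sketch of that leg follows, with placeholder secrets; the DHHC-1:xx:...: blobs in the log are this run's throwaway test keys, and a kernel plus nvme-cli build with NVMe/TCP in-band authentication support is assumed:

hostid=29f67375-a902-e411-ace9-001e67bc3c9a
# Bidirectional auth: --dhchap-secret is the host key, --dhchap-ctrl-secret the
# controller key; omit the latter for the unidirectional rounds in this log.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" \
  --dhchap-secret 'DHHC-1:01:<host key>' \
  --dhchap-ctrl-secret 'DHHC-1:02:<controller key>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect "disconnected 1 controller(s)"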
00:18:11.038 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.038 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.038 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.295 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.295 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.295 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.295 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.295 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.295 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.295 { 00:18:11.295 "cntlid": 133, 00:18:11.295 "qid": 0, 00:18:11.295 "state": "enabled", 00:18:11.295 "thread": "nvmf_tgt_poll_group_000", 00:18:11.295 "listen_address": { 00:18:11.295 "trtype": "TCP", 00:18:11.295 "adrfam": "IPv4", 00:18:11.295 "traddr": "10.0.0.2", 00:18:11.295 "trsvcid": "4420" 00:18:11.295 }, 00:18:11.295 "peer_address": { 00:18:11.295 "trtype": "TCP", 00:18:11.295 "adrfam": "IPv4", 00:18:11.295 "traddr": "10.0.0.1", 00:18:11.295 "trsvcid": "41074" 00:18:11.295 }, 00:18:11.295 "auth": { 00:18:11.295 "state": "completed", 00:18:11.295 "digest": "sha512", 00:18:11.295 "dhgroup": "ffdhe6144" 00:18:11.295 } 00:18:11.295 } 00:18:11.295 ]' 00:18:11.295 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.295 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.295 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.295 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:11.295 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.295 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.295 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.295 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.553 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:18:12.487 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.487 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:12.487 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:12.487 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.487 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.487 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.487 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.487 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.487 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.745 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:12.745 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.745 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.745 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:12.745 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:12.745 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.745 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:12.745 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.745 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.745 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.745 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.745 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.310 00:18:13.310 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.310 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.310 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:13.568 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.568 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.568 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.568 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.568 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.568 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.568 { 00:18:13.568 "cntlid": 135, 00:18:13.568 "qid": 0, 00:18:13.568 "state": "enabled", 00:18:13.568 "thread": "nvmf_tgt_poll_group_000", 00:18:13.568 "listen_address": { 00:18:13.568 "trtype": "TCP", 00:18:13.568 "adrfam": "IPv4", 00:18:13.568 "traddr": "10.0.0.2", 00:18:13.568 "trsvcid": "4420" 00:18:13.568 }, 00:18:13.568 "peer_address": { 00:18:13.568 "trtype": "TCP", 00:18:13.568 "adrfam": "IPv4", 00:18:13.568 "traddr": "10.0.0.1", 00:18:13.568 "trsvcid": "41104" 00:18:13.568 }, 00:18:13.568 "auth": { 00:18:13.568 "state": "completed", 00:18:13.568 "digest": "sha512", 00:18:13.568 "dhgroup": "ffdhe6144" 00:18:13.568 } 00:18:13.568 } 00:18:13.568 ]' 00:18:13.568 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.568 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.568 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.568 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:13.568 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.568 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.568 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.568 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.826 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:18:14.758 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.758 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:14.758 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.758 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:14.758 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.758 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.758 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.758 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.758 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.016 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:15.016 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.016 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.016 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:15.016 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.016 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.016 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.016 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.016 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.016 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.017 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.017 14:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.951 00:18:15.951 14:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.951 14:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.951 14:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.209 14:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.209 14:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
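The qpairs JSON printed next is what the digest/dhgroup/state assertions parse. Reduced to its essentials for this sha512/ffdhe8192 round (target RPC on the default socket assumed), the check amounts to:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Target-side RPC: dump the subsystem's queue pairs and assert what was negotiated.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished, queue enabled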
00:18:16.209 14:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.209 14:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.209 14:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.209 14:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.209 { 00:18:16.209 "cntlid": 137, 00:18:16.209 "qid": 0, 00:18:16.209 "state": "enabled", 00:18:16.209 "thread": "nvmf_tgt_poll_group_000", 00:18:16.209 "listen_address": { 00:18:16.209 "trtype": "TCP", 00:18:16.209 "adrfam": "IPv4", 00:18:16.209 "traddr": "10.0.0.2", 00:18:16.209 "trsvcid": "4420" 00:18:16.209 }, 00:18:16.209 "peer_address": { 00:18:16.209 "trtype": "TCP", 00:18:16.209 "adrfam": "IPv4", 00:18:16.209 "traddr": "10.0.0.1", 00:18:16.209 "trsvcid": "41124" 00:18:16.209 }, 00:18:16.209 "auth": { 00:18:16.209 "state": "completed", 00:18:16.209 "digest": "sha512", 00:18:16.209 "dhgroup": "ffdhe8192" 00:18:16.209 } 00:18:16.209 } 00:18:16.209 ]' 00:18:16.209 14:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.209 14:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.209 14:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.209 14:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.209 14:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.209 14:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.209 14:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.209 14:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.467 14:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:18:17.400 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.400 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:17.400 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.400 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.400 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.400 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.400 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.400 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.658 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:17.658 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.658 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.658 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:17.658 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:17.658 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.658 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.658 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.658 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.658 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.658 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.658 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.591 00:18:18.591 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.592 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.592 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.849 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.849 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.849 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.849 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.849 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.849 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.849 { 00:18:18.849 "cntlid": 139, 00:18:18.849 "qid": 0, 00:18:18.849 "state": "enabled", 00:18:18.849 "thread": "nvmf_tgt_poll_group_000", 00:18:18.849 "listen_address": { 00:18:18.849 "trtype": "TCP", 00:18:18.849 "adrfam": "IPv4", 00:18:18.849 "traddr": "10.0.0.2", 00:18:18.849 "trsvcid": "4420" 00:18:18.849 }, 00:18:18.849 "peer_address": { 00:18:18.849 "trtype": "TCP", 00:18:18.849 "adrfam": "IPv4", 00:18:18.849 "traddr": "10.0.0.1", 00:18:18.849 "trsvcid": "44990" 00:18:18.849 }, 00:18:18.849 "auth": { 00:18:18.849 "state": "completed", 00:18:18.849 "digest": "sha512", 00:18:18.849 "dhgroup": "ffdhe8192" 00:18:18.849 } 00:18:18.849 } 00:18:18.849 ]' 00:18:18.849 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.849 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.849 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.849 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.849 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.849 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.849 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.849 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.106 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZjliNjJkMThjYTE3MDdjNmU2YmQ4Yjg3ZmEzZDc1MzNSpyDl: --dhchap-ctrl-secret DHHC-1:02:YTQ2OTJkODc2ZGIwMmE2NjEzODM3NDdhNWU1MTczODhhNjBhMzhmNDFkZDA3YWRmSj0TVg==: 00:18:20.038 14:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.038 14:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:20.038 14:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.038 14:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.038 14:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.038 14:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.038 14:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.038 14:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.296 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:20.296 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.296 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:20.296 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:20.296 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:20.296 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.296 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.296 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.296 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.296 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.296 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.296 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.862 00:18:20.862 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.862 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.862 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.120 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.120 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.120 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.120 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.120 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.120 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.120 { 00:18:21.120 "cntlid": 141, 00:18:21.120 "qid": 0, 00:18:21.120 "state": "enabled", 00:18:21.120 "thread": "nvmf_tgt_poll_group_000", 00:18:21.120 "listen_address": 
{ 00:18:21.120 "trtype": "TCP", 00:18:21.120 "adrfam": "IPv4", 00:18:21.120 "traddr": "10.0.0.2", 00:18:21.120 "trsvcid": "4420" 00:18:21.120 }, 00:18:21.120 "peer_address": { 00:18:21.120 "trtype": "TCP", 00:18:21.120 "adrfam": "IPv4", 00:18:21.120 "traddr": "10.0.0.1", 00:18:21.120 "trsvcid": "45028" 00:18:21.120 }, 00:18:21.120 "auth": { 00:18:21.120 "state": "completed", 00:18:21.120 "digest": "sha512", 00:18:21.120 "dhgroup": "ffdhe8192" 00:18:21.120 } 00:18:21.120 } 00:18:21.120 ]' 00:18:21.377 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.377 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.377 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.378 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.378 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.378 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.378 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.378 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.635 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ODk2MmY5NGMwMjQxMDRjYTU3MDJjMWYzN2MyMzBmNzE4NjY2OWMzMDliNTFkZDNiOBdn9Q==: --dhchap-ctrl-secret DHHC-1:01:ZTdlNmQzMGRiMjQ0YzMzOTQ3MDk0Y2I0ZjkxMjU1NTWzWWa/: 00:18:22.567 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.567 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:22.567 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.567 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.567 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.567 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.567 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.567 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.824 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:22.824 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.824 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:22.824 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:22.824 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:22.824 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.824 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:22.824 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.824 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.824 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.824 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.824 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.757 00:18:23.757 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.757 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.757 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.757 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.757 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.757 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.757 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.757 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.757 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.757 { 00:18:23.757 "cntlid": 143, 00:18:23.757 "qid": 0, 00:18:23.757 "state": "enabled", 00:18:23.757 "thread": "nvmf_tgt_poll_group_000", 00:18:23.757 "listen_address": { 00:18:23.758 "trtype": "TCP", 00:18:23.758 "adrfam": "IPv4", 00:18:23.758 "traddr": "10.0.0.2", 00:18:23.758 "trsvcid": "4420" 00:18:23.758 }, 00:18:23.758 "peer_address": { 00:18:23.758 "trtype": "TCP", 00:18:23.758 "adrfam": "IPv4", 00:18:23.758 "traddr": "10.0.0.1", 00:18:23.758 "trsvcid": "45062" 00:18:23.758 }, 00:18:23.758 "auth": { 00:18:23.758 "state": "completed", 00:18:23.758 "digest": "sha512", 00:18:23.758 "dhgroup": 
"ffdhe8192" 00:18:23.758 } 00:18:23.758 } 00:18:23.758 ]' 00:18:23.758 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.015 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.015 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.015 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.015 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.015 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.015 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.015 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.272 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:18:25.202 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.202 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:25.202 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.202 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.202 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.202 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:25.202 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:25.202 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:25.202 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.202 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.202 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.458 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:25.458 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.458 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.458 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:25.458 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:25.458 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.458 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.458 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.458 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.458 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.458 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.458 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.389 00:18:26.389 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.389 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.389 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.389 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.389 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.389 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.389 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.389 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.389 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.389 { 00:18:26.389 "cntlid": 145, 00:18:26.389 "qid": 0, 00:18:26.389 "state": "enabled", 00:18:26.389 "thread": "nvmf_tgt_poll_group_000", 00:18:26.389 "listen_address": { 00:18:26.389 "trtype": "TCP", 00:18:26.389 "adrfam": "IPv4", 00:18:26.389 "traddr": "10.0.0.2", 00:18:26.389 "trsvcid": "4420" 00:18:26.389 }, 00:18:26.389 "peer_address": { 00:18:26.389 "trtype": "TCP", 00:18:26.389 "adrfam": "IPv4", 00:18:26.389 "traddr": "10.0.0.1", 00:18:26.389 "trsvcid": "45098" 00:18:26.389 }, 00:18:26.389 "auth": { 00:18:26.389 
"state": "completed", 00:18:26.389 "digest": "sha512", 00:18:26.389 "dhgroup": "ffdhe8192" 00:18:26.389 } 00:18:26.389 } 00:18:26.389 ]' 00:18:26.389 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.389 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.389 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.647 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.647 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.647 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.647 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.647 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.904 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:Y2I3YzZhNjE1N2IzOGQ2Nzk1NzIyMGJhMmQyMDA0NWQzOTU3MDZkMWQ3OGIzZWY1DD6o/Q==: --dhchap-ctrl-secret DHHC-1:03:MDQ5NmQ5MmJhYmFhODFhMDNjZTI0ODE4MWUyMmU1OWNkMmUzZTE3OTU1YjMyZTcyMGIyNTQyZDg5YTY1NWFkONjITE0=: 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:27.835 14:12:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:27.835 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:28.400 request: 00:18:28.400 { 00:18:28.400 "name": "nvme0", 00:18:28.400 "trtype": "tcp", 00:18:28.400 "traddr": "10.0.0.2", 00:18:28.400 "adrfam": "ipv4", 00:18:28.400 "trsvcid": "4420", 00:18:28.400 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:28.400 "prchk_reftag": false, 00:18:28.400 "prchk_guard": false, 00:18:28.400 "hdgst": false, 00:18:28.400 "ddgst": false, 00:18:28.400 "dhchap_key": "key2", 00:18:28.400 "method": "bdev_nvme_attach_controller", 00:18:28.400 "req_id": 1 00:18:28.400 } 00:18:28.400 Got JSON-RPC error response 00:18:28.400 response: 00:18:28.400 { 00:18:28.400 "code": -5, 00:18:28.400 "message": "Input/output error" 00:18:28.400 } 00:18:28.400 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:28.400 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:28.400 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:28.400 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:28.400 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:28.400 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.400 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.400 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.400 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.401 
14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.401 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.401 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.401 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.401 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:28.401 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.401 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:28.401 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.401 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:28.401 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.401 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.401 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:29.332 request: 00:18:29.332 { 00:18:29.332 "name": "nvme0", 00:18:29.332 "trtype": "tcp", 00:18:29.332 "traddr": "10.0.0.2", 00:18:29.332 "adrfam": "ipv4", 00:18:29.332 "trsvcid": "4420", 00:18:29.332 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:29.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:29.332 "prchk_reftag": false, 00:18:29.332 "prchk_guard": false, 00:18:29.332 "hdgst": false, 00:18:29.332 "ddgst": false, 00:18:29.332 "dhchap_key": "key1", 00:18:29.332 "dhchap_ctrlr_key": "ckey2", 00:18:29.332 "method": "bdev_nvme_attach_controller", 00:18:29.332 "req_id": 1 00:18:29.332 } 00:18:29.332 Got JSON-RPC error response 00:18:29.332 response: 00:18:29.332 { 00:18:29.332 "code": -5, 00:18:29.332 "message": "Input/output error" 00:18:29.332 } 00:18:29.332 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:29.332 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:29.332 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:29.333 14:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.333 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.265 request: 00:18:30.265 { 00:18:30.265 "name": "nvme0", 00:18:30.265 "trtype": "tcp", 00:18:30.265 "traddr": "10.0.0.2", 00:18:30.265 "adrfam": "ipv4", 00:18:30.265 "trsvcid": "4420", 00:18:30.265 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:30.265 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:30.265 "prchk_reftag": false, 00:18:30.265 "prchk_guard": false, 00:18:30.265 "hdgst": false, 00:18:30.265 "ddgst": false, 00:18:30.265 "dhchap_key": "key1", 00:18:30.265 "dhchap_ctrlr_key": "ckey1", 00:18:30.265 "method": "bdev_nvme_attach_controller", 00:18:30.265 "req_id": 1 00:18:30.265 } 00:18:30.265 Got JSON-RPC error response 00:18:30.265 response: 00:18:30.265 { 00:18:30.265 "code": -5, 00:18:30.265 "message": "Input/output error" 00:18:30.265 } 00:18:30.265 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:30.265 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:30.265 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:30.265 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:30.265 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:30.265 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.265 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.265 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.265 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 215036 00:18:30.265 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 215036 ']' 00:18:30.265 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 215036 00:18:30.265 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:30.265 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:30.265 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 215036 00:18:30.265 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 215036' 00:18:30.266 killing process with pid 215036 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 215036 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 215036 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=237211 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 237211 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 237211 ']' 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:30.266 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.524 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.524 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:30.524 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:30.524 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:30.524 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.524 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.524 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:30.524 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 237211 00:18:30.524 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 237211 ']' 00:18:30.524 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.524 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:30.524 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
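(The trace up to this point is the negative half of SPDK's DH-HMAC-CHAP target test: the NOT helper from common/autotest_common.sh inverts an exit status, so each "Got JSON-RPC error response ... Input/output error" (-5) from bdev_nvme_attach_controller is the asserted outcome when the host offers a key or controller key that the target has not registered for this host NQN. A condensed sketch of the host-side RPC pattern being exercised, with rpc.py standing in for the full scripts/rpc.py path and the socket, address, and NQNs copied from the log:

  # host initiator: allow every DH-HMAC-CHAP digest and dhgroup
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  # the target registered only key1 for this host, so key2 must be rejected
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2   # expected: JSON-RPC error -5

This is a reading of the trace, not a reproduction of target/auth.sh itself.)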
00:18:30.524 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:30.524 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.782 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.782 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:30.782 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:30.782 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.782 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.040 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.040 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:31.040 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.040 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:31.040 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:31.040 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:31.040 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.040 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:31.040 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.040 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.040 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.040 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.040 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.972 00:18:31.972 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.972 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.972 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.972 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.230 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.230 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.230 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.230 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.230 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.230 { 00:18:32.230 "cntlid": 1, 00:18:32.230 "qid": 0, 00:18:32.230 "state": "enabled", 00:18:32.230 "thread": "nvmf_tgt_poll_group_000", 00:18:32.230 "listen_address": { 00:18:32.230 "trtype": "TCP", 00:18:32.230 "adrfam": "IPv4", 00:18:32.230 "traddr": "10.0.0.2", 00:18:32.230 "trsvcid": "4420" 00:18:32.230 }, 00:18:32.230 "peer_address": { 00:18:32.230 "trtype": "TCP", 00:18:32.230 "adrfam": "IPv4", 00:18:32.230 "traddr": "10.0.0.1", 00:18:32.230 "trsvcid": "52746" 00:18:32.230 }, 00:18:32.230 "auth": { 00:18:32.230 "state": "completed", 00:18:32.230 "digest": "sha512", 00:18:32.230 "dhgroup": "ffdhe8192" 00:18:32.230 } 00:18:32.230 } 00:18:32.230 ]' 00:18:32.230 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.230 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.230 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.230 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.230 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.230 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.230 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.230 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.488 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MTc4YTNhOWRjOTNmMmQ2MDU0ODZiOTRkMjRlZDMzNGE5ZDNiNDg3Y2UzMjI4OGUzNjkyMmQ0MDZiMzYzOTJhNCkUHwI=: 00:18:33.420 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.420 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:33.420 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.420 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.420 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.420 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:33.420 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.420 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.420 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.420 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:33.420 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:33.678 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.678 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:33.678 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.678 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:33.678 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.678 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:33.678 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.678 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.678 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.935 request: 00:18:33.935 { 00:18:33.935 "name": "nvme0", 00:18:33.935 "trtype": "tcp", 00:18:33.935 "traddr": "10.0.0.2", 00:18:33.935 "adrfam": "ipv4", 00:18:33.935 "trsvcid": "4420", 00:18:33.935 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:33.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:33.935 "prchk_reftag": false, 00:18:33.935 "prchk_guard": false, 00:18:33.935 "hdgst": false, 00:18:33.935 "ddgst": false, 00:18:33.935 "dhchap_key": "key3", 00:18:33.935 "method": "bdev_nvme_attach_controller", 00:18:33.935 "req_id": 1 00:18:33.935 } 00:18:33.935 Got JSON-RPC error response 00:18:33.935 response: 00:18:33.935 { 00:18:33.935 "code": -5, 00:18:33.935 "message": "Input/output error" 00:18:33.935 } 00:18:33.935 14:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:33.935 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:33.935 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:33.935 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:33.935 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:33.935 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:33.935 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:33.935 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:34.193 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.193 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:34.193 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.193 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:34.193 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.193 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:34.193 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.193 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.193 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.451 request: 00:18:34.451 { 00:18:34.451 "name": "nvme0", 00:18:34.451 "trtype": "tcp", 00:18:34.451 "traddr": "10.0.0.2", 00:18:34.451 "adrfam": "ipv4", 00:18:34.451 "trsvcid": "4420", 00:18:34.451 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:34.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:34.451 "prchk_reftag": false, 00:18:34.451 "prchk_guard": false, 00:18:34.451 "hdgst": false, 00:18:34.451 "ddgst": false, 00:18:34.451 "dhchap_key": "key3", 00:18:34.451 
"method": "bdev_nvme_attach_controller", 00:18:34.451 "req_id": 1 00:18:34.451 } 00:18:34.451 Got JSON-RPC error response 00:18:34.451 response: 00:18:34.451 { 00:18:34.451 "code": -5, 00:18:34.451 "message": "Input/output error" 00:18:34.451 } 00:18:34.451 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:34.451 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.451 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.451 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.451 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:34.451 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:34.451 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:34.451 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:34.451 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:34.451 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:34.709 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:34.967 request: 00:18:34.967 { 00:18:34.967 "name": "nvme0", 00:18:34.967 "trtype": "tcp", 00:18:34.967 "traddr": "10.0.0.2", 00:18:34.967 "adrfam": "ipv4", 00:18:34.967 "trsvcid": "4420", 00:18:34.967 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:34.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:34.967 "prchk_reftag": false, 00:18:34.967 "prchk_guard": false, 00:18:34.968 "hdgst": false, 00:18:34.968 "ddgst": false, 00:18:34.968 "dhchap_key": "key0", 00:18:34.968 "dhchap_ctrlr_key": "key1", 00:18:34.968 "method": "bdev_nvme_attach_controller", 00:18:34.968 "req_id": 1 00:18:34.968 } 00:18:34.968 Got JSON-RPC error response 00:18:34.968 response: 00:18:34.968 { 00:18:34.968 "code": -5, 00:18:34.968 "message": "Input/output error" 00:18:34.968 } 00:18:34.968 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:34.968 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.968 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.968 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.968 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:34.968 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:35.226 00:18:35.226 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:35.226 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
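(The restriction tests traced above are the positive-then-negative half of the flow: the target is restarted with -L nvmf_auth, the host authenticates with key3 — the qpair dump shows "state": "completed" with sha512/ffdhe8192 — and the suite then narrows the host's DH-HMAC-CHAP offer and asserts that the same attach now fails. A minimal sketch of that sequence, assuming the same host RPC socket, address, and NQNs as in the log:

  # offer only sha256; the previously negotiated parameters no longer match
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3   # expected: Input/output error (-5)
  # restore the full digest/dhgroup lists before the remaining key-mismatch checks
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

Again a sketch of the commands visible in the trace, not the authoritative test script.)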
00:18:35.226 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.483 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.483 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.483 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.741 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:35.741 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:35.741 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 215055 00:18:35.741 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 215055 ']' 00:18:35.741 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 215055 00:18:35.741 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:35.741 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:35.741 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 215055 00:18:35.741 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:35.741 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:35.741 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 215055' 00:18:35.741 killing process with pid 215055 00:18:35.741 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 215055 00:18:35.741 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 215055 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:36.307 rmmod nvme_tcp 00:18:36.307 rmmod nvme_fabrics 00:18:36.307 rmmod nvme_keyring 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 
237211 ']' 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 237211 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 237211 ']' 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 237211 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 237211 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 237211' 00:18:36.307 killing process with pid 237211 00:18:36.307 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 237211 00:18:36.308 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 237211 00:18:36.566 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:36.566 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:36.566 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:36.566 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.566 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:36.566 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.566 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:36.566 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.PHb /tmp/spdk.key-sha256.Sn9 /tmp/spdk.key-sha384.3X4 /tmp/spdk.key-sha512.qK2 /tmp/spdk.key-sha512.z5c /tmp/spdk.key-sha384.HiE /tmp/spdk.key-sha256.X9y '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:39.105 00:18:39.105 real 3m4.616s 00:18:39.105 user 7m10.107s 00:18:39.105 sys 0m25.561s 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.105 ************************************ 00:18:39.105 END TEST nvmf_auth_target 00:18:39.105 ************************************ 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:39.105 ************************************ 00:18:39.105 START TEST nvmf_bdevio_no_huge 00:18:39.105 ************************************ 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:39.105 * Looking for test storage... 00:18:39.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.105 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:39.106 14:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:39.106 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:41.011 14:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:41.011 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.011 14:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:41.011 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:41.011 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:41.012 Found net devices under 0000:09:00.0: cvl_0_0 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
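
The loop traced above is the NIC discovery step: nvmf/common.sh matches known vendor:device IDs (Intel E810, 0x8086:0x159b in this run) against a prebuilt PCI bus cache, then resolves each matched function to its kernel netdev through sysfs. A minimal standalone sketch of the same idea, assuming only the Linux sysfs layout (the pci_bus_cache plumbing itself is not reproduced here):

    #!/usr/bin/env bash
    # Sketch: locate E810 functions (vendor 0x8086, device 0x159b) and their net devices.
    intel=0x8086
    e810=0x159b
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
        echo "Found ${pci##*/} ($intel - $e810)"
        # Each bound network function exposes its netdev name under .../net/
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "  net device under ${pci##*/}: ${net##*/}"
        done
    done
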
00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:41.012 Found net devices under 0000:09:00.1: cvl_0_1 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:41.012 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:18:41.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:18:41.012 00:18:41.012 --- 10.0.0.2 ping statistics --- 00:18:41.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.012 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:41.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:41.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:18:41.012 00:18:41.012 --- 10.0.0.1 ping statistics --- 00:18:41.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.012 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=239891 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 239891 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 239891 ']' 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
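
Condensed from the nvmf_tcp_init trace above: one port of the dual-port NIC is moved into a private network namespace so that the target (10.0.0.2, cvl_0_0 inside the namespace) and the initiator (10.0.0.1, cvl_0_1 in the root namespace) exchange real NVMe/TCP traffic on a single host. These are the commands the log shows, gathered in order:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port leaves root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> root ns
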
00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:41.012 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.012 [2024-07-26 14:12:48.961359] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:18:41.012 [2024-07-26 14:12:48.961444] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:41.271 [2024-07-26 14:12:49.035772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:41.271 [2024-07-26 14:12:49.145486] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.271 [2024-07-26 14:12:49.145562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.271 [2024-07-26 14:12:49.145578] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.271 [2024-07-26 14:12:49.145589] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.271 [2024-07-26 14:12:49.145599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:41.271 [2024-07-26 14:12:49.145707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:41.271 [2024-07-26 14:12:49.145764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:41.271 [2024-07-26 14:12:49.145813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:41.271 [2024-07-26 14:12:49.145816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:41.271 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:41.271 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:41.271 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:41.271 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:41.271 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.271 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.271 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:41.271 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.271 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.271 [2024-07-26 14:12:49.259961] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.271 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.271 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:41.271 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.271 14:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.271 Malloc0 00:18:41.271 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.271 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:41.272 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.272 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.272 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.272 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:41.272 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.272 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.530 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.530 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:41.530 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.530 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.530 [2024-07-26 14:12:49.297618] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.530 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.530 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:41.530 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:41.530 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:41.530 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:41.530 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:41.530 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:41.530 { 00:18:41.530 "params": { 00:18:41.530 "name": "Nvme$subsystem", 00:18:41.530 "trtype": "$TEST_TRANSPORT", 00:18:41.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.530 "adrfam": "ipv4", 00:18:41.530 "trsvcid": "$NVMF_PORT", 00:18:41.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.530 "hdgst": ${hdgst:-false}, 00:18:41.530 "ddgst": ${ddgst:-false} 00:18:41.530 }, 00:18:41.530 "method": "bdev_nvme_attach_controller" 00:18:41.530 } 00:18:41.530 EOF 00:18:41.530 )") 00:18:41.530 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:41.530 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
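
Stripped of the xtrace noise, the bdevio setup above is five RPC calls against the running target plus one bdevio invocation; gen_nvmf_target_json assembles the bdev_nvme_attach_controller config (printed in full just below) and hands it to bdevio on a /dev/fd path. A cleaned-up sketch, with the rpc.py and bdevio paths abbreviated from the full workspace paths in the log:

    rpc=scripts/rpc.py          # full path in this run: .../spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # <(...) is why the trace shows --json /dev/fd/62: bash substitutes an fd path
    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024
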
00:18:41.530 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:41.530 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:41.530 "params": { 00:18:41.530 "name": "Nvme1", 00:18:41.530 "trtype": "tcp", 00:18:41.530 "traddr": "10.0.0.2", 00:18:41.530 "adrfam": "ipv4", 00:18:41.530 "trsvcid": "4420", 00:18:41.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.530 "hdgst": false, 00:18:41.530 "ddgst": false 00:18:41.530 }, 00:18:41.530 "method": "bdev_nvme_attach_controller" 00:18:41.530 }' 00:18:41.530 [2024-07-26 14:12:49.341856] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:18:41.530 [2024-07-26 14:12:49.341936] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid240007 ] 00:18:41.530 [2024-07-26 14:12:49.405662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:41.530 [2024-07-26 14:12:49.521668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.530 [2024-07-26 14:12:49.521716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.530 [2024-07-26 14:12:49.521720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.788 I/O targets: 00:18:41.788 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:41.788 00:18:41.788 00:18:41.788 CUnit - A unit testing framework for C - Version 2.1-3 00:18:41.788 http://cunit.sourceforge.net/ 00:18:41.788 00:18:41.788 00:18:41.788 Suite: bdevio tests on: Nvme1n1 00:18:41.788 Test: blockdev write read block ...passed 00:18:41.788 Test: blockdev write zeroes read block ...passed 00:18:41.788 Test: blockdev write zeroes read no split ...passed 00:18:41.788 Test: blockdev write zeroes read split ...passed 00:18:42.046 Test: blockdev write zeroes read split partial ...passed 00:18:42.046 Test: blockdev reset ...[2024-07-26 14:12:49.807860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:42.046 [2024-07-26 14:12:49.807978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dbfb0 (9): Bad file descriptor 00:18:42.046 [2024-07-26 14:12:49.822016] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:42.046 passed 00:18:42.046 Test: blockdev write read 8 blocks ...passed 00:18:42.046 Test: blockdev write read size > 128k ...passed 00:18:42.046 Test: blockdev write read invalid size ...passed 00:18:42.046 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:42.046 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:42.046 Test: blockdev write read max offset ...passed 00:18:42.046 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:42.046 Test: blockdev writev readv 8 blocks ...passed 00:18:42.046 Test: blockdev writev readv 30 x 1block ...passed 00:18:42.304 Test: blockdev writev readv block ...passed 00:18:42.304 Test: blockdev writev readv size > 128k ...passed 00:18:42.304 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:42.304 Test: blockdev comparev and writev ...[2024-07-26 14:12:50.119694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.304 [2024-07-26 14:12:50.119747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.304 [2024-07-26 14:12:50.119772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.304 [2024-07-26 14:12:50.119800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:42.304 [2024-07-26 14:12:50.120125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.304 [2024-07-26 14:12:50.120149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:42.304 [2024-07-26 14:12:50.120182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.304 [2024-07-26 14:12:50.120199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:42.304 [2024-07-26 14:12:50.120503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.304 [2024-07-26 14:12:50.120535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:42.304 [2024-07-26 14:12:50.120559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.304 [2024-07-26 14:12:50.120576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:42.304 [2024-07-26 14:12:50.120877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.304 [2024-07-26 14:12:50.120902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:42.304 [2024-07-26 14:12:50.120924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.304 [2024-07-26 14:12:50.120941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:42.304 passed 00:18:42.304 Test: blockdev nvme passthru rw ...passed 00:18:42.304 Test: blockdev nvme passthru vendor specific ...[2024-07-26 14:12:50.202798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.304 [2024-07-26 14:12:50.202838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:42.304 [2024-07-26 14:12:50.202981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.304 [2024-07-26 14:12:50.203004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:42.305 [2024-07-26 14:12:50.203148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.305 [2024-07-26 14:12:50.203171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:42.305 [2024-07-26 14:12:50.203316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.305 [2024-07-26 14:12:50.203339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:42.305 passed 00:18:42.305 Test: blockdev nvme admin passthru ...passed 00:18:42.305 Test: blockdev copy ...passed 00:18:42.305 00:18:42.305 Run Summary: Type Total Ran Passed Failed Inactive 00:18:42.305 suites 1 1 n/a 0 0 00:18:42.305 tests 23 23 23 0 0 00:18:42.305 asserts 152 152 152 0 n/a 00:18:42.305 00:18:42.305 Elapsed time = 1.152 seconds 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.871 rmmod nvme_tcp 00:18:42.871 rmmod nvme_fabrics 00:18:42.871 rmmod nvme_keyring 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 239891 ']' 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 239891 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 239891 ']' 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 239891 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 239891 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 239891' 00:18:42.871 killing process with pid 239891 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 239891 00:18:42.871 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 239891 00:18:43.131 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:43.131 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:43.131 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:43.131 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:43.131 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:43.131 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.131 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.131 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:45.666 00:18:45.666 real 0m6.557s 00:18:45.666 user 0m10.167s 00:18:45.666 sys 0m2.558s 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.666 ************************************ 00:18:45.666 END TEST nvmf_bdevio_no_huge 00:18:45.666 ************************************ 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # 
'[' 3 -le 1 ']' 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:45.666 ************************************ 00:18:45.666 START TEST nvmf_tls 00:18:45.666 ************************************ 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:45.666 * Looking for test storage... 00:18:45.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:45.666 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
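
Worth noting in the common.sh prologue above: the host identity used by the initiator side of tls.sh is generated once with nvme-cli, and the host ID is derived from the generated NQN rather than produced separately. A sketch of one way to perform that derivation, matching the NVME_HOSTNQN/NVME_HOSTID values visible in the trace (the exact expansion common.sh uses is not shown in the log):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' to keep the bare UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
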
00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:45.667 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:47.566 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:47.566 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:47.566 Found net devices under 0000:09:00.0: cvl_0_0 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.566 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:47.567 Found net devices under 0000:09:00.1: cvl_0_1 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.567 14:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:47.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:18:47.567 00:18:47.567 --- 10.0.0.2 ping statistics --- 00:18:47.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.567 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:47.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:18:47.567 00:18:47.567 --- 10.0.0.1 ping statistics --- 00:18:47.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.567 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=242079 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 242079 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 242079 ']' 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:47.567 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.567 [2024-07-26 14:12:55.518909] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:18:47.567 [2024-07-26 14:12:55.518978] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.567 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.567 [2024-07-26 14:12:55.582493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.825 [2024-07-26 14:12:55.687953] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.825 [2024-07-26 14:12:55.688005] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.825 [2024-07-26 14:12:55.688028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.825 [2024-07-26 14:12:55.688039] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.825 [2024-07-26 14:12:55.688048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.825 [2024-07-26 14:12:55.688073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.825 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:47.825 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:47.825 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:47.825 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:47.825 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.825 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.825 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:47.825 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:48.082 true 00:18:48.082 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:48.082 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:48.340 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:48.340 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:48.340 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:48.599 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:48.599 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:48.857 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:48.857 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:48.857 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:18:49.115 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.115 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:49.373 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:49.373 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:49.373 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.373 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:49.631 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:49.631 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:49.631 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:49.889 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.889 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:50.170 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:50.170 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:50.170 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:50.428 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:50.428 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.7995QD2od2 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.ScuL97guE7 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.7995QD2od2 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ScuL97guE7 00:18:50.686 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:50.944 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:51.202 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.7995QD2od2 00:18:51.202 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.7995QD2od2 00:18:51.202 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:51.460 [2024-07-26 14:12:59.390276] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.460 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:51.718 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:51.977 [2024-07-26 14:12:59.867604] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:51.977 [2024-07-26 14:12:59.867851] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.977 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:52.235 malloc0 00:18:52.235 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:52.493 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7995QD2od2 00:18:52.751 [2024-07-26 14:13:00.717751] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:52.751 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.7995QD2od2 00:18:52.751 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.944 Initializing NVMe Controllers 00:19:04.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:04.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:04.944 Initialization complete. Launching workers. 00:19:04.944 ======================================================== 00:19:04.944 Latency(us) 00:19:04.944 Device Information : IOPS MiB/s Average min max 00:19:04.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8718.08 34.05 7343.14 1029.18 9133.77 00:19:04.944 ======================================================== 00:19:04.944 Total : 8718.08 34.05 7343.14 1029.18 9133.77 00:19:04.944 00:19:04.944 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7995QD2od2 00:19:04.944 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:04.944 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:04.944 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:04.945 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7995QD2od2' 00:19:04.945 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:04.945 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=243965 00:19:04.945 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:04.945 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:04.945 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 243965 /var/tmp/bdevperf.sock 00:19:04.945 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 243965 ']' 00:19:04.945 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.945 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:04.945 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.945 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:04.945 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.945 [2024-07-26 14:13:10.876854] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:19:04.945 [2024-07-26 14:13:10.876920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243965 ] 00:19:04.945 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.945 [2024-07-26 14:13:10.933236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.945 [2024-07-26 14:13:11.039458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.945 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:04.945 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:04.945 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7995QD2od2 00:19:04.945 [2024-07-26 14:13:11.359777] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.945 [2024-07-26 14:13:11.359904] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:04.945 TLSTESTn1 00:19:04.945 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:04.945 Running I/O for 10 seconds... 
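[editor's note] Stepping back, everything the first data run needed was configured target-side between 14:12:55 and 14:13:00. The target is started with --wait-for-rpc precisely so the ssl socket options can be set before initialization; the suite also probes tls-version 7 and the kTLS toggle along the way, then settles on 13. Condensed into a hedged replay ($rpc is shorthand I am introducing for the scripts/rpc.py path used throughout; every call appears verbatim in the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc sock_set_default_impl -i ssl                    # TLS lives in the ssl sock impl
    $rpc sock_impl_set_options -i ssl --tls-version 13   # must happen before init
    $rpc framework_start_init                            # leave --wait-for-rpc mode
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                    # -k: this listener requires TLS
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7995QD2od2   # key file, chmod 0600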
00:19:14.907 00:19:14.907 Latency(us) 00:19:14.907 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.907 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:14.907 Verification LBA range: start 0x0 length 0x2000 00:19:14.907 TLSTESTn1 : 10.02 3318.02 12.96 0.00 0.00 38502.19 8252.68 43690.67 00:19:14.907 =================================================================================================================== 00:19:14.907 Total : 3318.02 12.96 0.00 0.00 38502.19 8252.68 43690.67 00:19:14.907 0 00:19:14.907 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:14.907 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 243965 00:19:14.907 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 243965 ']' 00:19:14.907 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 243965 00:19:14.907 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:14.907 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.907 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 243965 00:19:14.907 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:14.907 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:14.907 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 243965' 00:19:14.907 killing process with pid 243965 00:19:14.907 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 243965 00:19:14.907 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.908 00:19:14.908 Latency(us) 00:19:14.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.908 =================================================================================================================== 00:19:14.908 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.908 [2024-07-26 14:13:21.616672] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 243965 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ScuL97guE7 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ScuL97guE7 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:14.908 
14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ScuL97guE7 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ScuL97guE7' 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=245167 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 245167 /var/tmp/bdevperf.sock 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 245167 ']' 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.908 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.908 [2024-07-26 14:13:21.897450] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
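[editor's note] The initiator-side mechanics are identical for the passing run above and for each expected-failure run that follows: start bdevperf paused (-z) on a private RPC socket, attach a controller with (or without) a PSK, then drive I/O through that same socket. All paths and flags below are taken from the trace:

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.7995QD2od2        # must match the key registered on the target
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests

The earlier spdk_nvme_perf run is the same idea without the RPC detour: -S ssl selects the ssl socket implementation and --psk-path points at the same key file.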
00:19:14.908 [2024-07-26 14:13:21.897543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245167 ] 00:19:14.908 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.908 [2024-07-26 14:13:21.954558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.908 [2024-07-26 14:13:22.059746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ScuL97guE7 00:19:14.908 [2024-07-26 14:13:22.436574] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.908 [2024-07-26 14:13:22.436680] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:14.908 [2024-07-26 14:13:22.444213] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:14.908 [2024-07-26 14:13:22.445292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2308f90 (107): Transport endpoint is not connected 00:19:14.908 [2024-07-26 14:13:22.446284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2308f90 (9): Bad file descriptor 00:19:14.908 [2024-07-26 14:13:22.447285] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:14.908 [2024-07-26 14:13:22.447305] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:14.908 [2024-07-26 14:13:22.447323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
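[editor's note] This attach is a deliberate negative test: /tmp/tmp.ScuL97guE7 holds the second key, which was never registered with the target, so the handshake collapses and bdev_nvme_attach_controller surfaces code -5, "Input/output error". The NOT wrapper from autotest_common.sh inverts the exit status so the suite passes only when the attach fails; a minimal stand-in for that pattern (the real helper does more bookkeeping):

    # Hedged stand-in for the NOT() pattern: succeed only if the command fails.
    expect_failure() {
        if "$@"; then
            echo "unexpectedly succeeded: $*" >&2
            return 1
        fi
    }
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    expect_failure $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.ScuL97guE7        # key unknown to the target: handshake must fail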
00:19:14.908 request: 00:19:14.908 { 00:19:14.908 "name": "TLSTEST", 00:19:14.908 "trtype": "tcp", 00:19:14.908 "traddr": "10.0.0.2", 00:19:14.908 "adrfam": "ipv4", 00:19:14.908 "trsvcid": "4420", 00:19:14.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.908 "prchk_reftag": false, 00:19:14.908 "prchk_guard": false, 00:19:14.908 "hdgst": false, 00:19:14.908 "ddgst": false, 00:19:14.908 "psk": "/tmp/tmp.ScuL97guE7", 00:19:14.908 "method": "bdev_nvme_attach_controller", 00:19:14.908 "req_id": 1 00:19:14.908 } 00:19:14.908 Got JSON-RPC error response 00:19:14.908 response: 00:19:14.908 { 00:19:14.908 "code": -5, 00:19:14.908 "message": "Input/output error" 00:19:14.908 } 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 245167 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 245167 ']' 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 245167 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 245167 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 245167' 00:19:14.908 killing process with pid 245167 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 245167 00:19:14.908 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.908 00:19:14.908 Latency(us) 00:19:14.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.908 =================================================================================================================== 00:19:14.908 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:14.908 [2024-07-26 14:13:22.494711] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 245167 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7995QD2od2 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7995QD2od2 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7995QD2od2 00:19:14.908 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.909 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.909 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:14.909 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7995QD2od2' 00:19:14.909 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.909 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=245303 00:19:14.909 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.909 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.909 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 245303 /var/tmp/bdevperf.sock 00:19:14.909 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 245303 ']' 00:19:14.909 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.909 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.909 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.909 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.909 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.909 [2024-07-26 14:13:22.791726] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:19:14.909 [2024-07-26 14:13:22.791817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245303 ] 00:19:14.909 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.909 [2024-07-26 14:13:22.849869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.167 [2024-07-26 14:13:22.956987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.167 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.167 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:15.167 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.7995QD2od2 00:19:15.424 [2024-07-26 14:13:23.277434] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.424 [2024-07-26 14:13:23.277602] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:15.424 [2024-07-26 14:13:23.286237] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:15.424 [2024-07-26 14:13:23.286267] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:15.424 [2024-07-26 14:13:23.286319] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:15.424 [2024-07-26 14:13:23.286444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56f90 (107): Transport endpoint is not connected 00:19:15.424 [2024-07-26 14:13:23.287434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56f90 (9): Bad file descriptor 00:19:15.424 [2024-07-26 14:13:23.288434] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:15.424 [2024-07-26 14:13:23.288453] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:15.424 [2024-07-26 14:13:23.288479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:15.424 request: 00:19:15.424 { 00:19:15.424 "name": "TLSTEST", 00:19:15.424 "trtype": "tcp", 00:19:15.424 "traddr": "10.0.0.2", 00:19:15.424 "adrfam": "ipv4", 00:19:15.424 "trsvcid": "4420", 00:19:15.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.424 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:15.424 "prchk_reftag": false, 00:19:15.424 "prchk_guard": false, 00:19:15.424 "hdgst": false, 00:19:15.424 "ddgst": false, 00:19:15.424 "psk": "/tmp/tmp.7995QD2od2", 00:19:15.424 "method": "bdev_nvme_attach_controller", 00:19:15.424 "req_id": 1 00:19:15.424 } 00:19:15.424 Got JSON-RPC error response 00:19:15.424 response: 00:19:15.424 { 00:19:15.424 "code": -5, 00:19:15.424 "message": "Input/output error" 00:19:15.424 } 00:19:15.424 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 245303 00:19:15.424 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 245303 ']' 00:19:15.424 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 245303 00:19:15.424 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:15.424 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.424 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 245303 00:19:15.424 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:15.424 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:15.424 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 245303' 00:19:15.424 killing process with pid 245303 00:19:15.424 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 245303 00:19:15.424 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.424 00:19:15.424 Latency(us) 00:19:15.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.424 =================================================================================================================== 00:19:15.424 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:15.424 [2024-07-26 14:13:23.330673] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:15.424 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 245303 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7995QD2od2 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7995QD2od2 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7995QD2od2 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7995QD2od2' 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=245439 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 245439 /var/tmp/bdevperf.sock 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 245439 ']' 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:15.682 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.682 [2024-07-26 14:13:23.624950] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:19:15.682 [2024-07-26 14:13:23.625027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245439 ] 00:19:15.682 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.682 [2024-07-26 14:13:23.682764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.940 [2024-07-26 14:13:23.786151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.940 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.940 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:15.940 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7995QD2od2 00:19:16.198 [2024-07-26 14:13:24.111484] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.198 [2024-07-26 14:13:24.111632] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:16.198 [2024-07-26 14:13:24.118428] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:16.198 [2024-07-26 14:13:24.118463] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:16.198 [2024-07-26 14:13:24.118516] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:16.198 [2024-07-26 14:13:24.119314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a1f90 (107): Transport endpoint is not connected 00:19:16.198 [2024-07-26 14:13:24.120292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a1f90 (9): Bad file descriptor 00:19:16.198 [2024-07-26 14:13:24.121291] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:16.198 [2024-07-26 14:13:24.121311] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:16.198 [2024-07-26 14:13:24.121329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
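[editor's note] The two identity errors above ("Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>") show what the lookup keys on: the PSK identity is derived from the host NQN and the subsystem NQN together. A key registered for host1 on cnode1 therefore fails when presented as host2, and fails again against cnode2, even though the key bytes themselves are valid. Allowing a second identity would need its own registration, sketched here with a hypothetical second key file:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Each (subsystem, host) pair carries its own PSK registration.
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7995QD2od2
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk /tmp/host2.key   # hypothetical second host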
00:19:16.198 request: 00:19:16.198 { 00:19:16.198 "name": "TLSTEST", 00:19:16.198 "trtype": "tcp", 00:19:16.198 "traddr": "10.0.0.2", 00:19:16.198 "adrfam": "ipv4", 00:19:16.198 "trsvcid": "4420", 00:19:16.198 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:16.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.198 "prchk_reftag": false, 00:19:16.198 "prchk_guard": false, 00:19:16.199 "hdgst": false, 00:19:16.199 "ddgst": false, 00:19:16.199 "psk": "/tmp/tmp.7995QD2od2", 00:19:16.199 "method": "bdev_nvme_attach_controller", 00:19:16.199 "req_id": 1 00:19:16.199 } 00:19:16.199 Got JSON-RPC error response 00:19:16.199 response: 00:19:16.199 { 00:19:16.199 "code": -5, 00:19:16.199 "message": "Input/output error" 00:19:16.199 } 00:19:16.199 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 245439 00:19:16.199 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 245439 ']' 00:19:16.199 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 245439 00:19:16.199 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:16.199 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:16.199 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 245439 00:19:16.199 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:16.199 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:16.199 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 245439' 00:19:16.199 killing process with pid 245439 00:19:16.199 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 245439 00:19:16.199 Received shutdown signal, test time was about 10.000000 seconds 00:19:16.199 00:19:16.199 Latency(us) 00:19:16.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.199 =================================================================================================================== 00:19:16.199 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:16.199 [2024-07-26 14:13:24.170435] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:16.199 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 245439 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:16.457 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:16.458 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:16.458 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=245478 00:19:16.458 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:16.458 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:16.458 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 245478 /var/tmp/bdevperf.sock 00:19:16.458 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 245478 ']' 00:19:16.458 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.458 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:16.458 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.458 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:16.458 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.458 [2024-07-26 14:13:24.469048] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:19:16.458 [2024-07-26 14:13:24.469143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245478 ] 00:19:16.716 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.716 [2024-07-26 14:13:24.529896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.716 [2024-07-26 14:13:24.635032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.716 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:16.716 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:16.716 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:16.973 [2024-07-26 14:13:24.960619] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:16.973 [2024-07-26 14:13:24.962297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2275770 (9): Bad file descriptor 00:19:16.973 [2024-07-26 14:13:24.963293] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:16.973 [2024-07-26 14:13:24.963314] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:16.973 [2024-07-26 14:13:24.963342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
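[editor's note] The final negative case simply omits --psk: with no key the initiator never builds TLS credentials, the plain-TCP connection is torn down by the TLS-required listener, and the attach fails with the same -5. In sketch form:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1; then
        echo 'plain TCP attach to a TLS-required listener should not succeed' >&2
        exit 1
    fi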
00:19:16.973 request: 00:19:16.973 { 00:19:16.973 "name": "TLSTEST", 00:19:16.973 "trtype": "tcp", 00:19:16.973 "traddr": "10.0.0.2", 00:19:16.973 "adrfam": "ipv4", 00:19:16.973 "trsvcid": "4420", 00:19:16.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.973 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.973 "prchk_reftag": false, 00:19:16.973 "prchk_guard": false, 00:19:16.973 "hdgst": false, 00:19:16.973 "ddgst": false, 00:19:16.973 "method": "bdev_nvme_attach_controller", 00:19:16.974 "req_id": 1 00:19:16.974 } 00:19:16.974 Got JSON-RPC error response 00:19:16.974 response: 00:19:16.974 { 00:19:16.974 "code": -5, 00:19:16.974 "message": "Input/output error" 00:19:16.974 } 00:19:16.974 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 245478 00:19:16.974 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 245478 ']' 00:19:16.974 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 245478 00:19:16.974 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:16.974 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:16.974 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 245478 00:19:17.231 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:17.231 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:17.231 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 245478' 00:19:17.231 killing process with pid 245478 00:19:17.231 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 245478 00:19:17.231 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.231 00:19:17.231 Latency(us) 00:19:17.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.231 =================================================================================================================== 00:19:17.231 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.231 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 245478 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 242079 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 242079 ']' 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 242079 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 242079 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 242079' 00:19:17.489 killing process with pid 242079 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 242079 00:19:17.489 [2024-07-26 14:13:25.292000] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:17.489 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 242079 00:19:17.747 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.UtPld0HauR 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.UtPld0HauR 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=245700 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 245700 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 245700 ']' 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.748 14:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:17.748 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.748 [2024-07-26 14:13:25.669122] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:19:17.748 [2024-07-26 14:13:25.669224] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.748 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.748 [2024-07-26 14:13:25.732757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.006 [2024-07-26 14:13:25.832756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.006 [2024-07-26 14:13:25.832812] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.006 [2024-07-26 14:13:25.832832] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.006 [2024-07-26 14:13:25.832842] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.006 [2024-07-26 14:13:25.832858] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
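The format_interchange_psk step above feeds the 48-character secret 00112233445566778899aabbccddeeff0011223344556677 and digest selector 2 through an inline python helper to produce the NVMeTLSkey-1:02:...: string used for the rest of the run. A minimal Python sketch of that transformation, assuming the interchange format base64-encodes the secret followed by its little-endian CRC32 (the byte order of the CRC and the function name are assumptions; the log only shows the input and the resulting key_long):

    import base64, struct, zlib

    def format_interchange_psk(secret: str, hash_id: int) -> str:
        # Append a little-endian CRC32 of the secret, then base64 the whole
        # thing; hash_id 1 selects SHA-256, 2 selects SHA-384 in the prefix.
        raw = secret.encode()
        raw += struct.pack("<I", zlib.crc32(raw) & 0xFFFFFFFF)
        return "NVMeTLSkey-1:%02x:%s:" % (hash_id, base64.b64encode(raw).decode())

    # Should reproduce the key_long value logged above:
    print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))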
00:19:18.006 [2024-07-26 14:13:25.832890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.006 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:18.006 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:18.006 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:18.006 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.006 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.006 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.006 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.UtPld0HauR 00:19:18.006 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UtPld0HauR 00:19:18.006 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:18.263 [2024-07-26 14:13:26.206638] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.264 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:18.521 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:18.779 [2024-07-26 14:13:26.707963] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:18.779 [2024-07-26 14:13:26.708189] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.779 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:19.036 malloc0 00:19:19.036 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:19.293 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UtPld0HauR 00:19:19.551 [2024-07-26 14:13:27.445378] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:19.551 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UtPld0HauR 00:19:19.551 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:19.551 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:19.551 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:19.551 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UtPld0HauR' 00:19:19.551 14:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.551 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=245907 00:19:19.551 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:19.551 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.551 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 245907 /var/tmp/bdevperf.sock 00:19:19.551 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 245907 ']' 00:19:19.551 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.552 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.552 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.552 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.552 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.552 [2024-07-26 14:13:27.497700] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:19:19.552 [2024-07-26 14:13:27.497765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245907 ] 00:19:19.552 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.552 [2024-07-26 14:13:27.553342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.809 [2024-07-26 14:13:27.661101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.809 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.809 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:19.809 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UtPld0HauR 00:19:20.068 [2024-07-26 14:13:27.991971] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.068 [2024-07-26 14:13:27.992083] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:20.068 TLSTESTn1 00:19:20.326 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:20.326 Running I/O for 10 seconds... 
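bdevperf attaches the TLSTEST controller through the same JSON-RPC socket that rpc.py targets with -s /var/tmp/bdevperf.sock, and the request bodies printed in this log show the exact parameter set. A stripped-down sketch of that call path, assuming only that the socket speaks plain JSON-RPC 2.0 with no extra framing, the way scripts/rpc.py drives it:

    import json, socket

    def spdk_rpc(sock_path: str, method: str, params: dict) -> dict:
        # One request/response over SPDK's UNIX-domain RPC socket; keep
        # reading until the accumulated bytes parse as a complete JSON doc.
        req = {"jsonrpc": "2.0", "method": method, "params": params, "id": 1}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed mid-response")
                buf += chunk
                try:
                    return json.loads(buf)
                except ValueError:
                    pass  # partial response, keep reading

    # Same parameters as the attach in this log:
    spdk_rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
        "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "/tmp/tmp.UtPld0HauR",
    })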
00:19:30.320 00:19:30.320 Latency(us) 00:19:30.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.320 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:30.320 Verification LBA range: start 0x0 length 0x2000 00:19:30.320 TLSTESTn1 : 10.02 3403.67 13.30 0.00 0.00 37542.89 8009.96 49710.27 00:19:30.320 =================================================================================================================== 00:19:30.320 Total : 3403.67 13.30 0.00 0.00 37542.89 8009.96 49710.27 00:19:30.320 0 00:19:30.320 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:30.320 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 245907 00:19:30.320 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 245907 ']' 00:19:30.320 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 245907 00:19:30.320 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:30.320 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.320 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 245907 00:19:30.320 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:30.320 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:30.320 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 245907' 00:19:30.320 killing process with pid 245907 00:19:30.320 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 245907 00:19:30.320 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.320 00:19:30.320 Latency(us) 00:19:30.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.320 =================================================================================================================== 00:19:30.320 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:30.320 [2024-07-26 14:13:38.273608] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:30.320 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 245907 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.UtPld0HauR 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UtPld0HauR 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UtPld0HauR 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:30.577 14:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UtPld0HauR 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UtPld0HauR' 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=247220 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 247220 /var/tmp/bdevperf.sock 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 247220 ']' 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.577 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.577 [2024-07-26 14:13:38.588752] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:19:30.578 [2024-07-26 14:13:38.588848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247220 ] 00:19:30.835 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.835 [2024-07-26 14:13:38.646860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.835 [2024-07-26 14:13:38.750784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.092 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.092 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:31.092 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UtPld0HauR 00:19:31.092 [2024-07-26 14:13:39.074791] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.092 [2024-07-26 14:13:39.074891] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:31.093 [2024-07-26 14:13:39.074906] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.UtPld0HauR 00:19:31.093 request: 00:19:31.093 { 00:19:31.093 "name": "TLSTEST", 00:19:31.093 "trtype": "tcp", 00:19:31.093 "traddr": "10.0.0.2", 00:19:31.093 "adrfam": "ipv4", 00:19:31.093 "trsvcid": "4420", 00:19:31.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.093 "prchk_reftag": false, 00:19:31.093 "prchk_guard": false, 00:19:31.093 "hdgst": false, 00:19:31.093 "ddgst": false, 00:19:31.093 "psk": "/tmp/tmp.UtPld0HauR", 00:19:31.093 "method": "bdev_nvme_attach_controller", 00:19:31.093 "req_id": 1 00:19:31.093 } 00:19:31.093 Got JSON-RPC error response 00:19:31.093 response: 00:19:31.093 { 00:19:31.093 "code": -1, 00:19:31.093 "message": "Operation not permitted" 00:19:31.093 } 00:19:31.093 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 247220 00:19:31.093 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 247220 ']' 00:19:31.093 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 247220 00:19:31.093 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:31.093 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.093 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 247220 00:19:31.351 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:31.351 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:31.351 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 247220' 00:19:31.351 killing process with pid 247220 00:19:31.351 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 247220 00:19:31.351 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.351 
00:19:31.351 Latency(us) 00:19:31.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.351 =================================================================================================================== 00:19:31.351 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:31.351 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 247220 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 245700 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 245700 ']' 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 245700 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 245700 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 245700' 00:19:31.609 killing process with pid 245700 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 245700 00:19:31.609 [2024-07-26 14:13:39.404478] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:31.609 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 245700 00:19:31.867 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:31.867 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:31.867 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:31.867 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.867 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=247369 00:19:31.867 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:31.867 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 247369 00:19:31.867 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 247369 ']' 00:19:31.867 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.867 14:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:31.867 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.867 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:31.867 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.867 [2024-07-26 14:13:39.739122] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:19:31.867 [2024-07-26 14:13:39.739214] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.867 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.867 [2024-07-26 14:13:39.801341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.125 [2024-07-26 14:13:39.906163] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.125 [2024-07-26 14:13:39.906212] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.125 [2024-07-26 14:13:39.906236] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.125 [2024-07-26 14:13:39.906247] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.125 [2024-07-26 14:13:39.906256] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
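This target instance is the one that rejects /tmp/tmp.UtPld0HauR: after the chmod 0666 above, the bdevperf attach failed with "Could not load PSK" (-1, Operation not permitted) and the upcoming nvmf_subsystem_add_host fails with "Could not retrieve PSK from file" (-32603), both rooted in "Incorrect permissions for PSK file". A sketch of that gate; the exact mask is an inference, since the log only demonstrates that 0600 passes and 0666 fails:

    import os, stat

    def psk_perms_ok(path: str) -> bool:
        # Reject a PSK file that grants any access to group or other,
        # consistent with 0600 passing and 0666 failing in this log.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0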
00:19:32.125 [2024-07-26 14:13:39.906281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.UtPld0HauR 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.UtPld0HauR 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.UtPld0HauR 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UtPld0HauR 00:19:32.125 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:32.383 [2024-07-26 14:13:40.274406] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.383 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:32.640 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:32.898 [2024-07-26 14:13:40.767754] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:32.898 [2024-07-26 14:13:40.768015] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.898 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:33.156 malloc0 00:19:33.156 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:33.414 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UtPld0HauR 00:19:33.672 [2024-07-26 14:13:41.517375] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:33.672 [2024-07-26 14:13:41.517413] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:33.672 [2024-07-26 14:13:41.517459] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:33.672 request: 00:19:33.672 { 00:19:33.672 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.672 "host": "nqn.2016-06.io.spdk:host1", 00:19:33.672 "psk": "/tmp/tmp.UtPld0HauR", 00:19:33.672 "method": "nvmf_subsystem_add_host", 00:19:33.672 "req_id": 1 00:19:33.672 } 00:19:33.672 Got JSON-RPC error response 00:19:33.672 response: 00:19:33.672 { 00:19:33.672 "code": -32603, 00:19:33.672 "message": "Internal error" 00:19:33.672 } 00:19:33.672 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:33.672 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:33.672 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:33.672 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.672 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 247369 00:19:33.672 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 247369 ']' 00:19:33.672 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 247369 00:19:33.672 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:33.672 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.672 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 247369 00:19:33.672 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:33.672 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:33.672 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 247369' 00:19:33.672 killing process with pid 247369 00:19:33.672 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 247369 00:19:33.672 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 247369 00:19:33.930 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.UtPld0HauR 00:19:33.931 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:33.931 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:33.931 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:33.931 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.931 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=247662 00:19:33.931 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:33.931 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 
247662 00:19:33.931 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 247662 ']' 00:19:33.931 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.931 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.931 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.931 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.931 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.931 [2024-07-26 14:13:41.894127] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:19:33.931 [2024-07-26 14:13:41.894214] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.931 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.189 [2024-07-26 14:13:41.964905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.189 [2024-07-26 14:13:42.073303] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.189 [2024-07-26 14:13:42.073368] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.189 [2024-07-26 14:13:42.073381] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.189 [2024-07-26 14:13:42.073392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.189 [2024-07-26 14:13:42.073401] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
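With the key file back at 0600 (the chmod just before this restart), setup_nvmf_tgt goes through cleanly again below. The test writes the key with echo -n and then a separate chmod; a sketch of doing the same in one step, so the file is created with tight permissions from the outset (helper name is illustrative):

    import os

    def write_psk(path: str, key: str) -> None:
        # Create with mode 0600 at open time instead of chmod after the fact.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
        with os.fdopen(fd, "w") as f:
            f.write(key)  # no trailing newline, matching the log's echo -n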
00:19:34.189 [2024-07-26 14:13:42.073435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.188 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.188 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:35.188 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:35.188 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.188 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.188 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.188 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.UtPld0HauR 00:19:35.188 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UtPld0HauR 00:19:35.188 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:35.188 [2024-07-26 14:13:43.081572] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.188 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:35.475 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:35.756 [2024-07-26 14:13:43.566876] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:35.756 [2024-07-26 14:13:43.567099] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.756 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:36.025 malloc0 00:19:36.025 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:36.286 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UtPld0HauR 00:19:36.543 [2024-07-26 14:13:44.315086] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:36.543 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=247963 00:19:36.543 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.543 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.543 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 247963 /var/tmp/bdevperf.sock 00:19:36.543 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # 
'[' -z 247963 ']' 00:19:36.543 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.543 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.543 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.543 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.543 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.543 [2024-07-26 14:13:44.371857] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:19:36.543 [2024-07-26 14:13:44.371939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247963 ] 00:19:36.543 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.543 [2024-07-26 14:13:44.430773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.543 [2024-07-26 14:13:44.540375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.801 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:36.801 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:36.801 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UtPld0HauR 00:19:37.058 [2024-07-26 14:13:44.859981] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.058 [2024-07-26 14:13:44.860105] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:37.058 TLSTESTn1 00:19:37.058 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:37.316 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:37.316 "subsystems": [ 00:19:37.316 { 00:19:37.316 "subsystem": "keyring", 00:19:37.316 "config": [] 00:19:37.316 }, 00:19:37.316 { 00:19:37.316 "subsystem": "iobuf", 00:19:37.316 "config": [ 00:19:37.316 { 00:19:37.316 "method": "iobuf_set_options", 00:19:37.316 "params": { 00:19:37.316 "small_pool_count": 8192, 00:19:37.316 "large_pool_count": 1024, 00:19:37.316 "small_bufsize": 8192, 00:19:37.316 "large_bufsize": 135168 00:19:37.316 } 00:19:37.316 } 00:19:37.316 ] 00:19:37.316 }, 00:19:37.316 { 00:19:37.316 "subsystem": "sock", 00:19:37.316 "config": [ 00:19:37.316 { 00:19:37.316 "method": "sock_set_default_impl", 00:19:37.316 "params": { 00:19:37.316 "impl_name": "posix" 00:19:37.316 } 00:19:37.316 }, 00:19:37.316 { 00:19:37.316 "method": "sock_impl_set_options", 00:19:37.316 "params": { 00:19:37.316 "impl_name": "ssl", 00:19:37.316 "recv_buf_size": 4096, 00:19:37.316 "send_buf_size": 4096, 
00:19:37.316 "enable_recv_pipe": true, 00:19:37.316 "enable_quickack": false, 00:19:37.316 "enable_placement_id": 0, 00:19:37.316 "enable_zerocopy_send_server": true, 00:19:37.316 "enable_zerocopy_send_client": false, 00:19:37.316 "zerocopy_threshold": 0, 00:19:37.316 "tls_version": 0, 00:19:37.316 "enable_ktls": false 00:19:37.316 } 00:19:37.316 }, 00:19:37.316 { 00:19:37.316 "method": "sock_impl_set_options", 00:19:37.316 "params": { 00:19:37.316 "impl_name": "posix", 00:19:37.316 "recv_buf_size": 2097152, 00:19:37.316 "send_buf_size": 2097152, 00:19:37.316 "enable_recv_pipe": true, 00:19:37.316 "enable_quickack": false, 00:19:37.316 "enable_placement_id": 0, 00:19:37.316 "enable_zerocopy_send_server": true, 00:19:37.316 "enable_zerocopy_send_client": false, 00:19:37.316 "zerocopy_threshold": 0, 00:19:37.316 "tls_version": 0, 00:19:37.316 "enable_ktls": false 00:19:37.316 } 00:19:37.316 } 00:19:37.316 ] 00:19:37.316 }, 00:19:37.316 { 00:19:37.316 "subsystem": "vmd", 00:19:37.316 "config": [] 00:19:37.316 }, 00:19:37.316 { 00:19:37.316 "subsystem": "accel", 00:19:37.316 "config": [ 00:19:37.316 { 00:19:37.316 "method": "accel_set_options", 00:19:37.316 "params": { 00:19:37.316 "small_cache_size": 128, 00:19:37.316 "large_cache_size": 16, 00:19:37.316 "task_count": 2048, 00:19:37.316 "sequence_count": 2048, 00:19:37.316 "buf_count": 2048 00:19:37.316 } 00:19:37.316 } 00:19:37.316 ] 00:19:37.316 }, 00:19:37.316 { 00:19:37.316 "subsystem": "bdev", 00:19:37.316 "config": [ 00:19:37.316 { 00:19:37.316 "method": "bdev_set_options", 00:19:37.316 "params": { 00:19:37.316 "bdev_io_pool_size": 65535, 00:19:37.316 "bdev_io_cache_size": 256, 00:19:37.316 "bdev_auto_examine": true, 00:19:37.316 "iobuf_small_cache_size": 128, 00:19:37.316 "iobuf_large_cache_size": 16 00:19:37.316 } 00:19:37.316 }, 00:19:37.317 { 00:19:37.317 "method": "bdev_raid_set_options", 00:19:37.317 "params": { 00:19:37.317 "process_window_size_kb": 1024, 00:19:37.317 "process_max_bandwidth_mb_sec": 0 00:19:37.317 } 00:19:37.317 }, 00:19:37.317 { 00:19:37.317 "method": "bdev_iscsi_set_options", 00:19:37.317 "params": { 00:19:37.317 "timeout_sec": 30 00:19:37.317 } 00:19:37.317 }, 00:19:37.317 { 00:19:37.317 "method": "bdev_nvme_set_options", 00:19:37.317 "params": { 00:19:37.317 "action_on_timeout": "none", 00:19:37.317 "timeout_us": 0, 00:19:37.317 "timeout_admin_us": 0, 00:19:37.317 "keep_alive_timeout_ms": 10000, 00:19:37.317 "arbitration_burst": 0, 00:19:37.317 "low_priority_weight": 0, 00:19:37.317 "medium_priority_weight": 0, 00:19:37.317 "high_priority_weight": 0, 00:19:37.317 "nvme_adminq_poll_period_us": 10000, 00:19:37.317 "nvme_ioq_poll_period_us": 0, 00:19:37.317 "io_queue_requests": 0, 00:19:37.317 "delay_cmd_submit": true, 00:19:37.317 "transport_retry_count": 4, 00:19:37.317 "bdev_retry_count": 3, 00:19:37.317 "transport_ack_timeout": 0, 00:19:37.317 "ctrlr_loss_timeout_sec": 0, 00:19:37.317 "reconnect_delay_sec": 0, 00:19:37.317 "fast_io_fail_timeout_sec": 0, 00:19:37.317 "disable_auto_failback": false, 00:19:37.317 "generate_uuids": false, 00:19:37.317 "transport_tos": 0, 00:19:37.317 "nvme_error_stat": false, 00:19:37.317 "rdma_srq_size": 0, 00:19:37.317 "io_path_stat": false, 00:19:37.317 "allow_accel_sequence": false, 00:19:37.317 "rdma_max_cq_size": 0, 00:19:37.317 "rdma_cm_event_timeout_ms": 0, 00:19:37.317 "dhchap_digests": [ 00:19:37.317 "sha256", 00:19:37.317 "sha384", 00:19:37.317 "sha512" 00:19:37.317 ], 00:19:37.317 "dhchap_dhgroups": [ 00:19:37.317 "null", 00:19:37.317 "ffdhe2048", 00:19:37.317 
"ffdhe3072", 00:19:37.317 "ffdhe4096", 00:19:37.317 "ffdhe6144", 00:19:37.317 "ffdhe8192" 00:19:37.317 ] 00:19:37.317 } 00:19:37.317 }, 00:19:37.317 { 00:19:37.317 "method": "bdev_nvme_set_hotplug", 00:19:37.317 "params": { 00:19:37.317 "period_us": 100000, 00:19:37.317 "enable": false 00:19:37.317 } 00:19:37.317 }, 00:19:37.317 { 00:19:37.317 "method": "bdev_malloc_create", 00:19:37.317 "params": { 00:19:37.317 "name": "malloc0", 00:19:37.317 "num_blocks": 8192, 00:19:37.317 "block_size": 4096, 00:19:37.317 "physical_block_size": 4096, 00:19:37.317 "uuid": "4d28e1fd-3a44-43db-97b7-55d8ccff26b8", 00:19:37.317 "optimal_io_boundary": 0, 00:19:37.317 "md_size": 0, 00:19:37.317 "dif_type": 0, 00:19:37.317 "dif_is_head_of_md": false, 00:19:37.317 "dif_pi_format": 0 00:19:37.317 } 00:19:37.317 }, 00:19:37.317 { 00:19:37.317 "method": "bdev_wait_for_examine" 00:19:37.317 } 00:19:37.317 ] 00:19:37.317 }, 00:19:37.317 { 00:19:37.317 "subsystem": "nbd", 00:19:37.317 "config": [] 00:19:37.317 }, 00:19:37.317 { 00:19:37.317 "subsystem": "scheduler", 00:19:37.317 "config": [ 00:19:37.317 { 00:19:37.317 "method": "framework_set_scheduler", 00:19:37.317 "params": { 00:19:37.317 "name": "static" 00:19:37.317 } 00:19:37.317 } 00:19:37.317 ] 00:19:37.317 }, 00:19:37.317 { 00:19:37.317 "subsystem": "nvmf", 00:19:37.317 "config": [ 00:19:37.317 { 00:19:37.317 "method": "nvmf_set_config", 00:19:37.317 "params": { 00:19:37.317 "discovery_filter": "match_any", 00:19:37.317 "admin_cmd_passthru": { 00:19:37.317 "identify_ctrlr": false 00:19:37.317 } 00:19:37.317 } 00:19:37.317 }, 00:19:37.317 { 00:19:37.317 "method": "nvmf_set_max_subsystems", 00:19:37.317 "params": { 00:19:37.317 "max_subsystems": 1024 00:19:37.317 } 00:19:37.317 }, 00:19:37.317 { 00:19:37.317 "method": "nvmf_set_crdt", 00:19:37.317 "params": { 00:19:37.317 "crdt1": 0, 00:19:37.317 "crdt2": 0, 00:19:37.317 "crdt3": 0 00:19:37.317 } 00:19:37.317 }, 00:19:37.317 { 00:19:37.317 "method": "nvmf_create_transport", 00:19:37.317 "params": { 00:19:37.317 "trtype": "TCP", 00:19:37.317 "max_queue_depth": 128, 00:19:37.317 "max_io_qpairs_per_ctrlr": 127, 00:19:37.317 "in_capsule_data_size": 4096, 00:19:37.317 "max_io_size": 131072, 00:19:37.317 "io_unit_size": 131072, 00:19:37.317 "max_aq_depth": 128, 00:19:37.317 "num_shared_buffers": 511, 00:19:37.317 "buf_cache_size": 4294967295, 00:19:37.317 "dif_insert_or_strip": false, 00:19:37.317 "zcopy": false, 00:19:37.317 "c2h_success": false, 00:19:37.317 "sock_priority": 0, 00:19:37.317 "abort_timeout_sec": 1, 00:19:37.317 "ack_timeout": 0, 00:19:37.317 "data_wr_pool_size": 0 00:19:37.317 } 00:19:37.317 }, 00:19:37.317 { 00:19:37.317 "method": "nvmf_create_subsystem", 00:19:37.317 "params": { 00:19:37.317 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.317 "allow_any_host": false, 00:19:37.317 "serial_number": "SPDK00000000000001", 00:19:37.317 "model_number": "SPDK bdev Controller", 00:19:37.317 "max_namespaces": 10, 00:19:37.317 "min_cntlid": 1, 00:19:37.317 "max_cntlid": 65519, 00:19:37.317 "ana_reporting": false 00:19:37.317 } 00:19:37.317 }, 00:19:37.317 { 00:19:37.317 "method": "nvmf_subsystem_add_host", 00:19:37.317 "params": { 00:19:37.317 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.317 "host": "nqn.2016-06.io.spdk:host1", 00:19:37.317 "psk": "/tmp/tmp.UtPld0HauR" 00:19:37.317 } 00:19:37.317 }, 00:19:37.317 { 00:19:37.317 "method": "nvmf_subsystem_add_ns", 00:19:37.317 "params": { 00:19:37.317 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.317 "namespace": { 00:19:37.317 "nsid": 1, 00:19:37.317 
"bdev_name": "malloc0", 00:19:37.317 "nguid": "4D28E1FD3A4443DB97B755D8CCFF26B8", 00:19:37.317 "uuid": "4d28e1fd-3a44-43db-97b7-55d8ccff26b8", 00:19:37.317 "no_auto_visible": false 00:19:37.317 } 00:19:37.317 } 00:19:37.317 }, 00:19:37.317 { 00:19:37.317 "method": "nvmf_subsystem_add_listener", 00:19:37.317 "params": { 00:19:37.317 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.317 "listen_address": { 00:19:37.317 "trtype": "TCP", 00:19:37.317 "adrfam": "IPv4", 00:19:37.317 "traddr": "10.0.0.2", 00:19:37.317 "trsvcid": "4420" 00:19:37.317 }, 00:19:37.317 "secure_channel": true 00:19:37.317 } 00:19:37.317 } 00:19:37.317 ] 00:19:37.317 } 00:19:37.317 ] 00:19:37.317 }' 00:19:37.317 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:37.575 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:37.575 "subsystems": [ 00:19:37.575 { 00:19:37.575 "subsystem": "keyring", 00:19:37.575 "config": [] 00:19:37.575 }, 00:19:37.575 { 00:19:37.575 "subsystem": "iobuf", 00:19:37.575 "config": [ 00:19:37.575 { 00:19:37.575 "method": "iobuf_set_options", 00:19:37.575 "params": { 00:19:37.575 "small_pool_count": 8192, 00:19:37.575 "large_pool_count": 1024, 00:19:37.575 "small_bufsize": 8192, 00:19:37.575 "large_bufsize": 135168 00:19:37.575 } 00:19:37.575 } 00:19:37.575 ] 00:19:37.575 }, 00:19:37.575 { 00:19:37.575 "subsystem": "sock", 00:19:37.575 "config": [ 00:19:37.575 { 00:19:37.575 "method": "sock_set_default_impl", 00:19:37.575 "params": { 00:19:37.575 "impl_name": "posix" 00:19:37.575 } 00:19:37.575 }, 00:19:37.575 { 00:19:37.575 "method": "sock_impl_set_options", 00:19:37.575 "params": { 00:19:37.575 "impl_name": "ssl", 00:19:37.575 "recv_buf_size": 4096, 00:19:37.575 "send_buf_size": 4096, 00:19:37.575 "enable_recv_pipe": true, 00:19:37.575 "enable_quickack": false, 00:19:37.575 "enable_placement_id": 0, 00:19:37.575 "enable_zerocopy_send_server": true, 00:19:37.575 "enable_zerocopy_send_client": false, 00:19:37.575 "zerocopy_threshold": 0, 00:19:37.575 "tls_version": 0, 00:19:37.575 "enable_ktls": false 00:19:37.575 } 00:19:37.575 }, 00:19:37.575 { 00:19:37.575 "method": "sock_impl_set_options", 00:19:37.575 "params": { 00:19:37.575 "impl_name": "posix", 00:19:37.575 "recv_buf_size": 2097152, 00:19:37.575 "send_buf_size": 2097152, 00:19:37.575 "enable_recv_pipe": true, 00:19:37.575 "enable_quickack": false, 00:19:37.575 "enable_placement_id": 0, 00:19:37.575 "enable_zerocopy_send_server": true, 00:19:37.575 "enable_zerocopy_send_client": false, 00:19:37.575 "zerocopy_threshold": 0, 00:19:37.575 "tls_version": 0, 00:19:37.575 "enable_ktls": false 00:19:37.575 } 00:19:37.575 } 00:19:37.575 ] 00:19:37.575 }, 00:19:37.575 { 00:19:37.575 "subsystem": "vmd", 00:19:37.575 "config": [] 00:19:37.575 }, 00:19:37.575 { 00:19:37.575 "subsystem": "accel", 00:19:37.575 "config": [ 00:19:37.575 { 00:19:37.575 "method": "accel_set_options", 00:19:37.575 "params": { 00:19:37.575 "small_cache_size": 128, 00:19:37.575 "large_cache_size": 16, 00:19:37.575 "task_count": 2048, 00:19:37.575 "sequence_count": 2048, 00:19:37.575 "buf_count": 2048 00:19:37.575 } 00:19:37.575 } 00:19:37.575 ] 00:19:37.575 }, 00:19:37.575 { 00:19:37.575 "subsystem": "bdev", 00:19:37.575 "config": [ 00:19:37.575 { 00:19:37.575 "method": "bdev_set_options", 00:19:37.575 "params": { 00:19:37.575 "bdev_io_pool_size": 65535, 00:19:37.575 "bdev_io_cache_size": 256, 00:19:37.575 
"bdev_auto_examine": true, 00:19:37.575 "iobuf_small_cache_size": 128, 00:19:37.575 "iobuf_large_cache_size": 16 00:19:37.575 } 00:19:37.575 }, 00:19:37.575 { 00:19:37.575 "method": "bdev_raid_set_options", 00:19:37.575 "params": { 00:19:37.575 "process_window_size_kb": 1024, 00:19:37.575 "process_max_bandwidth_mb_sec": 0 00:19:37.575 } 00:19:37.575 }, 00:19:37.575 { 00:19:37.575 "method": "bdev_iscsi_set_options", 00:19:37.575 "params": { 00:19:37.575 "timeout_sec": 30 00:19:37.575 } 00:19:37.575 }, 00:19:37.575 { 00:19:37.575 "method": "bdev_nvme_set_options", 00:19:37.575 "params": { 00:19:37.575 "action_on_timeout": "none", 00:19:37.575 "timeout_us": 0, 00:19:37.575 "timeout_admin_us": 0, 00:19:37.575 "keep_alive_timeout_ms": 10000, 00:19:37.575 "arbitration_burst": 0, 00:19:37.575 "low_priority_weight": 0, 00:19:37.575 "medium_priority_weight": 0, 00:19:37.575 "high_priority_weight": 0, 00:19:37.575 "nvme_adminq_poll_period_us": 10000, 00:19:37.575 "nvme_ioq_poll_period_us": 0, 00:19:37.575 "io_queue_requests": 512, 00:19:37.575 "delay_cmd_submit": true, 00:19:37.575 "transport_retry_count": 4, 00:19:37.575 "bdev_retry_count": 3, 00:19:37.575 "transport_ack_timeout": 0, 00:19:37.575 "ctrlr_loss_timeout_sec": 0, 00:19:37.575 "reconnect_delay_sec": 0, 00:19:37.575 "fast_io_fail_timeout_sec": 0, 00:19:37.575 "disable_auto_failback": false, 00:19:37.575 "generate_uuids": false, 00:19:37.575 "transport_tos": 0, 00:19:37.575 "nvme_error_stat": false, 00:19:37.575 "rdma_srq_size": 0, 00:19:37.575 "io_path_stat": false, 00:19:37.575 "allow_accel_sequence": false, 00:19:37.575 "rdma_max_cq_size": 0, 00:19:37.575 "rdma_cm_event_timeout_ms": 0, 00:19:37.575 "dhchap_digests": [ 00:19:37.575 "sha256", 00:19:37.575 "sha384", 00:19:37.575 "sha512" 00:19:37.575 ], 00:19:37.575 "dhchap_dhgroups": [ 00:19:37.575 "null", 00:19:37.575 "ffdhe2048", 00:19:37.575 "ffdhe3072", 00:19:37.575 "ffdhe4096", 00:19:37.575 "ffdhe6144", 00:19:37.575 "ffdhe8192" 00:19:37.575 ] 00:19:37.575 } 00:19:37.575 }, 00:19:37.575 { 00:19:37.575 "method": "bdev_nvme_attach_controller", 00:19:37.575 "params": { 00:19:37.575 "name": "TLSTEST", 00:19:37.575 "trtype": "TCP", 00:19:37.575 "adrfam": "IPv4", 00:19:37.575 "traddr": "10.0.0.2", 00:19:37.575 "trsvcid": "4420", 00:19:37.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.575 "prchk_reftag": false, 00:19:37.575 "prchk_guard": false, 00:19:37.575 "ctrlr_loss_timeout_sec": 0, 00:19:37.575 "reconnect_delay_sec": 0, 00:19:37.575 "fast_io_fail_timeout_sec": 0, 00:19:37.575 "psk": "/tmp/tmp.UtPld0HauR", 00:19:37.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.575 "hdgst": false, 00:19:37.575 "ddgst": false 00:19:37.575 } 00:19:37.575 }, 00:19:37.575 { 00:19:37.575 "method": "bdev_nvme_set_hotplug", 00:19:37.575 "params": { 00:19:37.575 "period_us": 100000, 00:19:37.575 "enable": false 00:19:37.575 } 00:19:37.575 }, 00:19:37.575 { 00:19:37.575 "method": "bdev_wait_for_examine" 00:19:37.575 } 00:19:37.575 ] 00:19:37.575 }, 00:19:37.575 { 00:19:37.575 "subsystem": "nbd", 00:19:37.575 "config": [] 00:19:37.575 } 00:19:37.575 ] 00:19:37.575 }' 00:19:37.575 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 247963 00:19:37.575 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 247963 ']' 00:19:37.575 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 247963 00:19:37.575 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:37.575 
14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.576 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 247963 00:19:37.833 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:37.833 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:37.833 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 247963' 00:19:37.833 killing process with pid 247963 00:19:37.833 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 247963 00:19:37.833 Received shutdown signal, test time was about 10.000000 seconds 00:19:37.833 00:19:37.833 Latency(us) 00:19:37.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.833 =================================================================================================================== 00:19:37.833 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:37.833 [2024-07-26 14:13:45.615397] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:37.833 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 247963 00:19:38.090 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 247662 00:19:38.090 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 247662 ']' 00:19:38.090 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 247662 00:19:38.091 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:38.091 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.091 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 247662 00:19:38.091 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:38.091 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:38.091 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 247662' 00:19:38.091 killing process with pid 247662 00:19:38.091 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 247662 00:19:38.091 [2024-07-26 14:13:45.886708] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:38.091 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 247662 00:19:38.349 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:38.349 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:38.349 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:38.349 "subsystems": [ 00:19:38.349 { 00:19:38.349 "subsystem": "keyring", 00:19:38.349 "config": [] 00:19:38.349 }, 00:19:38.349 { 00:19:38.349 "subsystem": "iobuf", 00:19:38.349 "config": [ 00:19:38.349 { 00:19:38.349 "method": "iobuf_set_options", 00:19:38.349 "params": { 
00:19:38.349 "small_pool_count": 8192, 00:19:38.349 "large_pool_count": 1024, 00:19:38.349 "small_bufsize": 8192, 00:19:38.349 "large_bufsize": 135168 00:19:38.349 } 00:19:38.349 } 00:19:38.349 ] 00:19:38.349 }, 00:19:38.349 { 00:19:38.349 "subsystem": "sock", 00:19:38.349 "config": [ 00:19:38.349 { 00:19:38.349 "method": "sock_set_default_impl", 00:19:38.349 "params": { 00:19:38.349 "impl_name": "posix" 00:19:38.349 } 00:19:38.349 }, 00:19:38.349 { 00:19:38.349 "method": "sock_impl_set_options", 00:19:38.349 "params": { 00:19:38.349 "impl_name": "ssl", 00:19:38.349 "recv_buf_size": 4096, 00:19:38.349 "send_buf_size": 4096, 00:19:38.349 "enable_recv_pipe": true, 00:19:38.349 "enable_quickack": false, 00:19:38.349 "enable_placement_id": 0, 00:19:38.349 "enable_zerocopy_send_server": true, 00:19:38.349 "enable_zerocopy_send_client": false, 00:19:38.349 "zerocopy_threshold": 0, 00:19:38.349 "tls_version": 0, 00:19:38.349 "enable_ktls": false 00:19:38.349 } 00:19:38.349 }, 00:19:38.349 { 00:19:38.349 "method": "sock_impl_set_options", 00:19:38.349 "params": { 00:19:38.349 "impl_name": "posix", 00:19:38.349 "recv_buf_size": 2097152, 00:19:38.349 "send_buf_size": 2097152, 00:19:38.349 "enable_recv_pipe": true, 00:19:38.349 "enable_quickack": false, 00:19:38.349 "enable_placement_id": 0, 00:19:38.349 "enable_zerocopy_send_server": true, 00:19:38.349 "enable_zerocopy_send_client": false, 00:19:38.349 "zerocopy_threshold": 0, 00:19:38.349 "tls_version": 0, 00:19:38.349 "enable_ktls": false 00:19:38.349 } 00:19:38.349 } 00:19:38.349 ] 00:19:38.349 }, 00:19:38.349 { 00:19:38.349 "subsystem": "vmd", 00:19:38.349 "config": [] 00:19:38.349 }, 00:19:38.349 { 00:19:38.349 "subsystem": "accel", 00:19:38.349 "config": [ 00:19:38.349 { 00:19:38.349 "method": "accel_set_options", 00:19:38.349 "params": { 00:19:38.349 "small_cache_size": 128, 00:19:38.349 "large_cache_size": 16, 00:19:38.349 "task_count": 2048, 00:19:38.349 "sequence_count": 2048, 00:19:38.349 "buf_count": 2048 00:19:38.349 } 00:19:38.349 } 00:19:38.349 ] 00:19:38.349 }, 00:19:38.349 { 00:19:38.349 "subsystem": "bdev", 00:19:38.349 "config": [ 00:19:38.349 { 00:19:38.349 "method": "bdev_set_options", 00:19:38.349 "params": { 00:19:38.349 "bdev_io_pool_size": 65535, 00:19:38.349 "bdev_io_cache_size": 256, 00:19:38.349 "bdev_auto_examine": true, 00:19:38.349 "iobuf_small_cache_size": 128, 00:19:38.350 "iobuf_large_cache_size": 16 00:19:38.350 } 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "method": "bdev_raid_set_options", 00:19:38.350 "params": { 00:19:38.350 "process_window_size_kb": 1024, 00:19:38.350 "process_max_bandwidth_mb_sec": 0 00:19:38.350 } 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "method": "bdev_iscsi_set_options", 00:19:38.350 "params": { 00:19:38.350 "timeout_sec": 30 00:19:38.350 } 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "method": "bdev_nvme_set_options", 00:19:38.350 "params": { 00:19:38.350 "action_on_timeout": "none", 00:19:38.350 "timeout_us": 0, 00:19:38.350 "timeout_admin_us": 0, 00:19:38.350 "keep_alive_timeout_ms": 10000, 00:19:38.350 "arbitration_burst": 0, 00:19:38.350 "low_priority_weight": 0, 00:19:38.350 "medium_priority_weight": 0, 00:19:38.350 "high_priority_weight": 0, 00:19:38.350 "nvme_adminq_poll_period_us": 10000, 00:19:38.350 "nvme_ioq_poll_period_us": 0, 00:19:38.350 "io_queue_requests": 0, 00:19:38.350 "delay_cmd_submit": true, 00:19:38.350 "transport_retry_count": 4, 00:19:38.350 "bdev_retry_count": 3, 00:19:38.350 "transport_ack_timeout": 0, 00:19:38.350 "ctrlr_loss_timeout_sec": 0, 00:19:38.350 
"reconnect_delay_sec": 0, 00:19:38.350 "fast_io_fail_timeout_sec": 0, 00:19:38.350 "disable_auto_failback": false, 00:19:38.350 "generate_uuids": false, 00:19:38.350 "transport_tos": 0, 00:19:38.350 "nvme_error_stat": false, 00:19:38.350 "rdma_srq_size": 0, 00:19:38.350 "io_path_stat": false, 00:19:38.350 "allow_accel_sequence": false, 00:19:38.350 "rdma_max_cq_size": 0, 00:19:38.350 "rdma_cm_event_timeout_ms": 0, 00:19:38.350 "dhchap_digests": [ 00:19:38.350 "sha256", 00:19:38.350 "sha384", 00:19:38.350 "sha512" 00:19:38.350 ], 00:19:38.350 "dhchap_dhgroups": [ 00:19:38.350 "null", 00:19:38.350 "ffdhe2048", 00:19:38.350 "ffdhe3072", 00:19:38.350 "ffdhe4096", 00:19:38.350 "ffdhe6144", 00:19:38.350 "ffdhe8192" 00:19:38.350 ] 00:19:38.350 } 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "method": "bdev_nvme_set_hotplug", 00:19:38.350 "params": { 00:19:38.350 "period_us": 100000, 00:19:38.350 "enable": false 00:19:38.350 } 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "method": "bdev_malloc_create", 00:19:38.350 "params": { 00:19:38.350 "name": "malloc0", 00:19:38.350 "num_blocks": 8192, 00:19:38.350 "block_size": 4096, 00:19:38.350 "physical_block_size": 4096, 00:19:38.350 "uuid": "4d28e1fd-3a44-43db-97b7-55d8ccff26b8", 00:19:38.350 "optimal_io_boundary": 0, 00:19:38.350 "md_size": 0, 00:19:38.350 "dif_type": 0, 00:19:38.350 "dif_is_head_of_md": false, 00:19:38.350 "dif_pi_format": 0 00:19:38.350 } 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "method": "bdev_wait_for_examine" 00:19:38.350 } 00:19:38.350 ] 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "subsystem": "nbd", 00:19:38.350 "config": [] 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "subsystem": "scheduler", 00:19:38.350 "config": [ 00:19:38.350 { 00:19:38.350 "method": "framework_set_scheduler", 00:19:38.350 "params": { 00:19:38.350 "name": "static" 00:19:38.350 } 00:19:38.350 } 00:19:38.350 ] 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "subsystem": "nvmf", 00:19:38.350 "config": [ 00:19:38.350 { 00:19:38.350 "method": "nvmf_set_config", 00:19:38.350 "params": { 00:19:38.350 "discovery_filter": "match_any", 00:19:38.350 "admin_cmd_passthru": { 00:19:38.350 "identify_ctrlr": false 00:19:38.350 } 00:19:38.350 } 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "method": "nvmf_set_max_subsystems", 00:19:38.350 "params": { 00:19:38.350 "max_subsystems": 1024 00:19:38.350 } 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "method": "nvmf_set_crdt", 00:19:38.350 "params": { 00:19:38.350 "crdt1": 0, 00:19:38.350 "crdt2": 0, 00:19:38.350 "crdt3": 0 00:19:38.350 } 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "method": "nvmf_create_transport", 00:19:38.350 "params": { 00:19:38.350 "trtype": "TCP", 00:19:38.350 "max_queue_depth": 128, 00:19:38.350 "max_io_qpairs_per_ctrlr": 127, 00:19:38.350 "in_capsule_data_size": 4096, 00:19:38.350 "max_io_size": 131072, 00:19:38.350 "io_unit_size": 131072, 00:19:38.350 "max_aq_depth": 128, 00:19:38.350 "num_shared_buffers": 511, 00:19:38.350 "buf_cache_size": 4294967295, 00:19:38.350 "dif_insert_or_strip": false, 00:19:38.350 "zcopy": false, 00:19:38.350 "c2h_success": false, 00:19:38.350 "sock_priority": 0, 00:19:38.350 "abort_timeout_sec": 1, 00:19:38.350 "ack_timeout": 0, 00:19:38.350 "data_wr_pool_size": 0 00:19:38.350 } 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "method": "nvmf_create_subsystem", 00:19:38.350 "params": { 00:19:38.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.350 "allow_any_host": false, 00:19:38.350 "serial_number": "SPDK00000000000001", 00:19:38.350 "model_number": "SPDK bdev Controller", 00:19:38.350 
"max_namespaces": 10, 00:19:38.350 "min_cntlid": 1, 00:19:38.350 "max_cntlid": 65519, 00:19:38.350 "ana_reporting": false 00:19:38.350 } 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "method": "nvmf_subsystem_add_host", 00:19:38.350 "params": { 00:19:38.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.350 "host": "nqn.2016-06.io.spdk:host1", 00:19:38.350 "psk": "/tmp/tmp.UtPld0HauR" 00:19:38.350 } 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "method": "nvmf_subsystem_add_ns", 00:19:38.350 "params": { 00:19:38.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.350 "namespace": { 00:19:38.350 "nsid": 1, 00:19:38.350 "bdev_name": "malloc0", 00:19:38.350 "nguid": "4D28E1FD3A4443DB97B755D8CCFF26B8", 00:19:38.350 "uuid": "4d28e1fd-3a44-43db-97b7-55d8ccff26b8", 00:19:38.350 "no_auto_visible": false 00:19:38.350 } 00:19:38.350 } 00:19:38.350 }, 00:19:38.350 { 00:19:38.350 "method": "nvmf_subsystem_add_listener", 00:19:38.350 "params": { 00:19:38.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.350 "listen_address": { 00:19:38.350 "trtype": "TCP", 00:19:38.350 "adrfam": "IPv4", 00:19:38.350 "traddr": "10.0.0.2", 00:19:38.350 "trsvcid": "4420" 00:19:38.350 }, 00:19:38.350 "secure_channel": true 00:19:38.350 } 00:19:38.350 } 00:19:38.350 ] 00:19:38.350 } 00:19:38.350 ] 00:19:38.350 }' 00:19:38.350 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.350 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.350 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=248192 00:19:38.350 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:38.350 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 248192 00:19:38.350 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 248192 ']' 00:19:38.350 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.350 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.350 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.350 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.351 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.351 [2024-07-26 14:13:46.185555] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:19:38.351 [2024-07-26 14:13:46.185634] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.351 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.351 [2024-07-26 14:13:46.248723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.351 [2024-07-26 14:13:46.353398] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:38.351 [2024-07-26 14:13:46.353463] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.351 [2024-07-26 14:13:46.353477] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.351 [2024-07-26 14:13:46.353488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.351 [2024-07-26 14:13:46.353497] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.351 [2024-07-26 14:13:46.353604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.608 [2024-07-26 14:13:46.583486] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.608 [2024-07-26 14:13:46.604859] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:38.608 [2024-07-26 14:13:46.620922] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:38.608 [2024-07-26 14:13:46.621145] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.172 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.172 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:39.172 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:39.430 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:39.430 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.430 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.430 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=248286 00:19:39.430 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 248286 /var/tmp/bdevperf.sock 00:19:39.430 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 248286 ']' 00:19:39.430 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.430 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:39.430 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:39.430 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
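For orientation: the bdevperf config echoed below (target/tls.sh@204) departs from a stock initiator setup only in its bdev_nvme_attach_controller parameters. A distilled sketch of the TLS-relevant part, with every other method left at the defaults visible in the echo and the PSK handed over as a raw file path through the deprecated "psk" field, would be:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "TLSTEST",
                "trtype": "TCP",
                "adrfam": "IPv4",
                "traddr": "10.0.0.2",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "psk": "/tmp/tmp.UtPld0HauR"
              }
            },
            { "method": "bdev_wait_for_examine" }
          ]
        },
        { "subsystem": "nbd", "config": [] }
      ]
    }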
00:19:39.430 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:39.430 "subsystems": [ 00:19:39.430 { 00:19:39.430 "subsystem": "keyring", 00:19:39.430 "config": [] 00:19:39.430 }, 00:19:39.430 { 00:19:39.430 "subsystem": "iobuf", 00:19:39.430 "config": [ 00:19:39.430 { 00:19:39.430 "method": "iobuf_set_options", 00:19:39.430 "params": { 00:19:39.430 "small_pool_count": 8192, 00:19:39.430 "large_pool_count": 1024, 00:19:39.430 "small_bufsize": 8192, 00:19:39.430 "large_bufsize": 135168 00:19:39.430 } 00:19:39.430 } 00:19:39.430 ] 00:19:39.430 }, 00:19:39.430 { 00:19:39.430 "subsystem": "sock", 00:19:39.430 "config": [ 00:19:39.430 { 00:19:39.430 "method": "sock_set_default_impl", 00:19:39.430 "params": { 00:19:39.430 "impl_name": "posix" 00:19:39.430 } 00:19:39.430 }, 00:19:39.430 { 00:19:39.430 "method": "sock_impl_set_options", 00:19:39.430 "params": { 00:19:39.430 "impl_name": "ssl", 00:19:39.430 "recv_buf_size": 4096, 00:19:39.430 "send_buf_size": 4096, 00:19:39.430 "enable_recv_pipe": true, 00:19:39.430 "enable_quickack": false, 00:19:39.430 "enable_placement_id": 0, 00:19:39.430 "enable_zerocopy_send_server": true, 00:19:39.430 "enable_zerocopy_send_client": false, 00:19:39.430 "zerocopy_threshold": 0, 00:19:39.430 "tls_version": 0, 00:19:39.430 "enable_ktls": false 00:19:39.430 } 00:19:39.430 }, 00:19:39.430 { 00:19:39.430 "method": "sock_impl_set_options", 00:19:39.430 "params": { 00:19:39.430 "impl_name": "posix", 00:19:39.430 "recv_buf_size": 2097152, 00:19:39.430 "send_buf_size": 2097152, 00:19:39.430 "enable_recv_pipe": true, 00:19:39.430 "enable_quickack": false, 00:19:39.430 "enable_placement_id": 0, 00:19:39.430 "enable_zerocopy_send_server": true, 00:19:39.430 "enable_zerocopy_send_client": false, 00:19:39.430 "zerocopy_threshold": 0, 00:19:39.430 "tls_version": 0, 00:19:39.430 "enable_ktls": false 00:19:39.430 } 00:19:39.430 } 00:19:39.430 ] 00:19:39.430 }, 00:19:39.430 { 00:19:39.430 "subsystem": "vmd", 00:19:39.430 "config": [] 00:19:39.430 }, 00:19:39.430 { 00:19:39.430 "subsystem": "accel", 00:19:39.430 "config": [ 00:19:39.430 { 00:19:39.430 "method": "accel_set_options", 00:19:39.430 "params": { 00:19:39.430 "small_cache_size": 128, 00:19:39.430 "large_cache_size": 16, 00:19:39.430 "task_count": 2048, 00:19:39.430 "sequence_count": 2048, 00:19:39.430 "buf_count": 2048 00:19:39.430 } 00:19:39.430 } 00:19:39.430 ] 00:19:39.430 }, 00:19:39.430 { 00:19:39.430 "subsystem": "bdev", 00:19:39.430 "config": [ 00:19:39.430 { 00:19:39.430 "method": "bdev_set_options", 00:19:39.430 "params": { 00:19:39.430 "bdev_io_pool_size": 65535, 00:19:39.430 "bdev_io_cache_size": 256, 00:19:39.430 "bdev_auto_examine": true, 00:19:39.430 "iobuf_small_cache_size": 128, 00:19:39.431 "iobuf_large_cache_size": 16 00:19:39.431 } 00:19:39.431 }, 00:19:39.431 { 00:19:39.431 "method": "bdev_raid_set_options", 00:19:39.431 "params": { 00:19:39.431 "process_window_size_kb": 1024, 00:19:39.431 "process_max_bandwidth_mb_sec": 0 00:19:39.431 } 00:19:39.431 }, 00:19:39.431 { 00:19:39.431 "method": "bdev_iscsi_set_options", 00:19:39.431 "params": { 00:19:39.431 "timeout_sec": 30 00:19:39.431 } 00:19:39.431 }, 00:19:39.431 { 00:19:39.431 "method": "bdev_nvme_set_options", 00:19:39.431 "params": { 00:19:39.431 "action_on_timeout": "none", 00:19:39.431 "timeout_us": 0, 00:19:39.431 "timeout_admin_us": 0, 00:19:39.431 "keep_alive_timeout_ms": 10000, 00:19:39.431 "arbitration_burst": 0, 00:19:39.431 "low_priority_weight": 0, 00:19:39.431 "medium_priority_weight": 0, 
00:19:39.431 "high_priority_weight": 0, 00:19:39.431 "nvme_adminq_poll_period_us": 10000, 00:19:39.431 "nvme_ioq_poll_period_us": 0, 00:19:39.431 "io_queue_requests": 512, 00:19:39.431 "delay_cmd_submit": true, 00:19:39.431 "transport_retry_count": 4, 00:19:39.431 "bdev_retry_count": 3, 00:19:39.431 "transport_ack_timeout": 0, 00:19:39.431 "ctrlr_loss_timeout_sec": 0, 00:19:39.431 "reconnect_delay_sec": 0, 00:19:39.431 "fast_io_fail_timeout_sec": 0, 00:19:39.431 "disable_auto_failback": false, 00:19:39.431 "generate_uuids": false, 00:19:39.431 "transport_tos": 0, 00:19:39.431 "nvme_error_stat": false, 00:19:39.431 "rdma_srq_size": 0, 00:19:39.431 "io_path_stat": false, 00:19:39.431 "allow_accel_sequence": false, 00:19:39.431 "rdma_max_cq_size": 0, 00:19:39.431 "rdma_cm_event_timeout_ms": 0, 00:19:39.431 "dhchap_digests": [ 00:19:39.431 "sha256", 00:19:39.431 "sha384", 00:19:39.431 "sha512" 00:19:39.431 ], 00:19:39.431 "dhchap_dhgroups": [ 00:19:39.431 "null", 00:19:39.431 "ffdhe2048", 00:19:39.431 "ffdhe3072", 00:19:39.431 "ffdhe4096", 00:19:39.431 "ffdhe6144", 00:19:39.431 "ffdhe8192" 00:19:39.431 ] 00:19:39.431 } 00:19:39.431 }, 00:19:39.431 { 00:19:39.431 "method": "bdev_nvme_attach_controller", 00:19:39.431 "params": { 00:19:39.431 "name": "TLSTEST", 00:19:39.431 "trtype": "TCP", 00:19:39.431 "adrfam": "IPv4", 00:19:39.431 "traddr": "10.0.0.2", 00:19:39.431 "trsvcid": "4420", 00:19:39.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.431 "prchk_reftag": false, 00:19:39.431 "prchk_guard": false, 00:19:39.431 "ctrlr_loss_timeout_sec": 0, 00:19:39.431 "reconnect_delay_sec": 0, 00:19:39.431 "fast_io_fail_timeout_sec": 0, 00:19:39.431 "psk": "/tmp/tmp.UtPld0HauR", 00:19:39.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.431 "hdgst": false, 00:19:39.431 "ddgst": false 00:19:39.431 } 00:19:39.431 }, 00:19:39.431 { 00:19:39.431 "method": "bdev_nvme_set_hotplug", 00:19:39.431 "params": { 00:19:39.431 "period_us": 100000, 00:19:39.431 "enable": false 00:19:39.431 } 00:19:39.431 }, 00:19:39.431 { 00:19:39.431 "method": "bdev_wait_for_examine" 00:19:39.431 } 00:19:39.431 ] 00:19:39.431 }, 00:19:39.431 { 00:19:39.431 "subsystem": "nbd", 00:19:39.431 "config": [] 00:19:39.431 } 00:19:39.431 ] 00:19:39.431 }' 00:19:39.431 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:39.431 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.431 [2024-07-26 14:13:47.254963] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:19:39.431 [2024-07-26 14:13:47.255036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248286 ] 00:19:39.431 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.431 [2024-07-26 14:13:47.314250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.431 [2024-07-26 14:13:47.423599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.689 [2024-07-26 14:13:47.595566] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.689 [2024-07-26 14:13:47.595708] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:40.253 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.253 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:40.253 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:40.509 Running I/O for 10 seconds... 00:19:50.488 00:19:50.488 Latency(us) 00:19:50.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.488 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:50.488 Verification LBA range: start 0x0 length 0x2000 00:19:50.488 TLSTESTn1 : 10.02 3129.85 12.23 0.00 0.00 40818.31 6941.96 81167.55 00:19:50.488 =================================================================================================================== 00:19:50.488 Total : 3129.85 12.23 0.00 0.00 40818.31 6941.96 81167.55 00:19:50.488 0 00:19:50.488 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:50.488 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 248286 00:19:50.488 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 248286 ']' 00:19:50.488 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 248286 00:19:50.488 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:50.488 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:50.488 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 248286 00:19:50.488 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:50.488 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:50.488 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 248286' 00:19:50.488 killing process with pid 248286 00:19:50.488 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 248286 00:19:50.488 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.488 00:19:50.488 Latency(us) 00:19:50.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.488 
=================================================================================================================== 00:19:50.488 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.488 [2024-07-26 14:13:58.396467] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:50.488 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 248286 00:19:50.745 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 248192 00:19:50.745 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 248192 ']' 00:19:50.745 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 248192 00:19:50.745 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:50.745 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:50.745 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 248192 00:19:50.745 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:50.745 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:50.745 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 248192' 00:19:50.745 killing process with pid 248192 00:19:50.745 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 248192 00:19:50.745 [2024-07-26 14:13:58.677798] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:50.745 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 248192 00:19:51.004 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:51.004 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:51.004 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:51.004 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.004 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=249721 00:19:51.004 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:51.004 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 249721 00:19:51.004 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 249721 ']' 00:19:51.004 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.004 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:51.004 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
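The 10-second run above (bdevperf pid 248286 against target pid 248192) drove TLS with the PSK wired through JSON config on both ends; the deprecation warnings flag both spdk_nvme_ctrlr_opts.psk and the target-side PSK path for removal in v24.09. A fresh target (pid 249721) now starts, and the setup_nvmf_tgt phase below rebuilds the same state over live RPCs. Condensed for reference (the full jenkins workspace path to rpc.py is abbreviated here), the sequence is roughly:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UtPld0HauR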
00:19:51.004 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:51.004 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.004 [2024-07-26 14:13:58.997824] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:19:51.004 [2024-07-26 14:13:58.997925] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.262 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.262 [2024-07-26 14:13:59.060326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.262 [2024-07-26 14:13:59.156834] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.262 [2024-07-26 14:13:59.156891] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.262 [2024-07-26 14:13:59.156914] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.262 [2024-07-26 14:13:59.156925] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.262 [2024-07-26 14:13:59.156934] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:51.262 [2024-07-26 14:13:59.156958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.262 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:51.262 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:51.262 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:51.262 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:51.262 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.520 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.520 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.UtPld0HauR 00:19:51.520 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UtPld0HauR 00:19:51.520 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:51.796 [2024-07-26 14:13:59.549734] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.796 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:51.796 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:52.054 [2024-07-26 14:14:00.043082] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:52.054 [2024-07-26 14:14:00.043331] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.054 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:52.312 malloc0 00:19:52.312 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:52.876 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UtPld0HauR 00:19:52.877 [2024-07-26 14:14:00.835055] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:52.877 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=249889 00:19:52.877 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:52.877 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:52.877 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 249889 /var/tmp/bdevperf.sock 00:19:52.877 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 249889 ']' 00:19:52.877 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.877 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:52.877 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:52.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.877 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:52.877 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.134 [2024-07-26 14:14:00.897662] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:19:53.134 [2024-07-26 14:14:00.897746] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid249889 ] 00:19:53.134 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.134 [2024-07-26 14:14:00.958211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.135 [2024-07-26 14:14:01.066901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.392 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.392 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:53.392 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UtPld0HauR 00:19:53.649 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:53.649 [2024-07-26 14:14:01.641363] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:53.906 nvme0n1 00:19:53.906 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:53.906 Running I/O for 1 seconds... 00:19:55.276 00:19:55.276 Latency(us) 00:19:55.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.276 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:55.276 Verification LBA range: start 0x0 length 0x2000 00:19:55.276 nvme0n1 : 1.02 3370.93 13.17 0.00 0.00 37607.02 8204.14 46409.20 00:19:55.276 =================================================================================================================== 00:19:55.276 Total : 3370.93 13.17 0.00 0.00 37607.02 8204.14 46409.20 00:19:55.276 0 00:19:55.276 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 249889 00:19:55.276 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 249889 ']' 00:19:55.276 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 249889 00:19:55.276 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:55.276 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:55.276 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 249889 00:19:55.276 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:55.276 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:55.276 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 249889' 00:19:55.276 killing process with pid 249889 00:19:55.276 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 249889 00:19:55.276 Received shutdown signal, test time 
was about 1.000000 seconds 00:19:55.276 00:19:55.276 Latency(us) 00:19:55.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.276 =================================================================================================================== 00:19:55.276 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.276 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 249889 00:19:55.276 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 249721 00:19:55.276 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 249721 ']' 00:19:55.276 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 249721 00:19:55.277 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:55.277 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:55.277 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 249721 00:19:55.277 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:55.277 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:55.277 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 249721' 00:19:55.277 killing process with pid 249721 00:19:55.277 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 249721 00:19:55.277 [2024-07-26 14:14:03.187103] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:55.277 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 249721 00:19:55.533 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:19:55.533 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:55.533 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:55.533 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.533 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=250283 00:19:55.534 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:55.534 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 250283 00:19:55.534 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 250283 ']' 00:19:55.534 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.534 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:55.534 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
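That one-second run validated the keyring flow: rather than handing the bdev layer a PSK file path, target/tls.sh@227 first registers the key with the keyring and target/tls.sh@228 then references it by name when attaching, exactly as the log shows (again with the rpc.py path abbreviated):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UtPld0HauR
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

A new target instance (pid 250283) starts below to repeat the flow and dump the resulting state with save_config.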
00:19:55.534 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.534 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.534 [2024-07-26 14:14:03.512394] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:19:55.534 [2024-07-26 14:14:03.512476] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.791 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.791 [2024-07-26 14:14:03.586327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.791 [2024-07-26 14:14:03.694719] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.791 [2024-07-26 14:14:03.694780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.791 [2024-07-26 14:14:03.694793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.791 [2024-07-26 14:14:03.694804] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.791 [2024-07-26 14:14:03.694813] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.791 [2024-07-26 14:14:03.694840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.723 [2024-07-26 14:14:04.530208] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.723 malloc0 00:19:56.723 [2024-07-26 14:14:04.561323] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.723 [2024-07-26 14:14:04.571727] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=250437 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 250437 /var/tmp/bdevperf.sock 00:19:56.723 14:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 250437 ']' 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:56.723 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.723 [2024-07-26 14:14:04.635433] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:19:56.723 [2024-07-26 14:14:04.635493] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250437 ] 00:19:56.723 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.723 [2024-07-26 14:14:04.691392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.981 [2024-07-26 14:14:04.797750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.981 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.981 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:56.981 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UtPld0HauR 00:19:57.239 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:57.496 [2024-07-26 14:14:05.362728] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.496 nvme0n1 00:19:57.496 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:57.754 Running I/O for 1 seconds... 
00:19:58.686 00:19:58.686 Latency(us) 00:19:58.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.686 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:58.686 Verification LBA range: start 0x0 length 0x2000 00:19:58.686 nvme0n1 : 1.03 2720.30 10.63 0.00 0.00 46512.15 9126.49 45049.93 00:19:58.686 =================================================================================================================== 00:19:58.686 Total : 2720.30 10.63 0.00 0.00 46512.15 9126.49 45049.93 00:19:58.686 0 00:19:58.686 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:58.686 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.686 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.943 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.943 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:58.943 "subsystems": [ 00:19:58.943 { 00:19:58.943 "subsystem": "keyring", 00:19:58.943 "config": [ 00:19:58.943 { 00:19:58.943 "method": "keyring_file_add_key", 00:19:58.943 "params": { 00:19:58.943 "name": "key0", 00:19:58.943 "path": "/tmp/tmp.UtPld0HauR" 00:19:58.943 } 00:19:58.943 } 00:19:58.943 ] 00:19:58.943 }, 00:19:58.943 { 00:19:58.943 "subsystem": "iobuf", 00:19:58.943 "config": [ 00:19:58.943 { 00:19:58.943 "method": "iobuf_set_options", 00:19:58.943 "params": { 00:19:58.943 "small_pool_count": 8192, 00:19:58.943 "large_pool_count": 1024, 00:19:58.943 "small_bufsize": 8192, 00:19:58.943 "large_bufsize": 135168 00:19:58.943 } 00:19:58.943 } 00:19:58.943 ] 00:19:58.943 }, 00:19:58.943 { 00:19:58.943 "subsystem": "sock", 00:19:58.943 "config": [ 00:19:58.943 { 00:19:58.943 "method": "sock_set_default_impl", 00:19:58.943 "params": { 00:19:58.943 "impl_name": "posix" 00:19:58.943 } 00:19:58.943 }, 00:19:58.943 { 00:19:58.943 "method": "sock_impl_set_options", 00:19:58.943 "params": { 00:19:58.943 "impl_name": "ssl", 00:19:58.943 "recv_buf_size": 4096, 00:19:58.943 "send_buf_size": 4096, 00:19:58.943 "enable_recv_pipe": true, 00:19:58.943 "enable_quickack": false, 00:19:58.943 "enable_placement_id": 0, 00:19:58.943 "enable_zerocopy_send_server": true, 00:19:58.943 "enable_zerocopy_send_client": false, 00:19:58.943 "zerocopy_threshold": 0, 00:19:58.943 "tls_version": 0, 00:19:58.943 "enable_ktls": false 00:19:58.943 } 00:19:58.943 }, 00:19:58.943 { 00:19:58.943 "method": "sock_impl_set_options", 00:19:58.943 "params": { 00:19:58.943 "impl_name": "posix", 00:19:58.943 "recv_buf_size": 2097152, 00:19:58.943 "send_buf_size": 2097152, 00:19:58.943 "enable_recv_pipe": true, 00:19:58.943 "enable_quickack": false, 00:19:58.943 "enable_placement_id": 0, 00:19:58.943 "enable_zerocopy_send_server": true, 00:19:58.943 "enable_zerocopy_send_client": false, 00:19:58.943 "zerocopy_threshold": 0, 00:19:58.943 "tls_version": 0, 00:19:58.943 "enable_ktls": false 00:19:58.943 } 00:19:58.943 } 00:19:58.943 ] 00:19:58.943 }, 00:19:58.943 { 00:19:58.943 "subsystem": "vmd", 00:19:58.943 "config": [] 00:19:58.943 }, 00:19:58.943 { 00:19:58.943 "subsystem": "accel", 00:19:58.943 "config": [ 00:19:58.943 { 00:19:58.943 "method": "accel_set_options", 00:19:58.943 "params": { 00:19:58.943 "small_cache_size": 128, 00:19:58.943 "large_cache_size": 16, 00:19:58.943 "task_count": 2048, 00:19:58.943 "sequence_count": 2048, 00:19:58.943 "buf_count": 
2048 00:19:58.943 } 00:19:58.943 } 00:19:58.943 ] 00:19:58.943 }, 00:19:58.943 { 00:19:58.943 "subsystem": "bdev", 00:19:58.943 "config": [ 00:19:58.943 { 00:19:58.943 "method": "bdev_set_options", 00:19:58.943 "params": { 00:19:58.943 "bdev_io_pool_size": 65535, 00:19:58.943 "bdev_io_cache_size": 256, 00:19:58.943 "bdev_auto_examine": true, 00:19:58.943 "iobuf_small_cache_size": 128, 00:19:58.943 "iobuf_large_cache_size": 16 00:19:58.943 } 00:19:58.943 }, 00:19:58.943 { 00:19:58.943 "method": "bdev_raid_set_options", 00:19:58.944 "params": { 00:19:58.944 "process_window_size_kb": 1024, 00:19:58.944 "process_max_bandwidth_mb_sec": 0 00:19:58.944 } 00:19:58.944 }, 00:19:58.944 { 00:19:58.944 "method": "bdev_iscsi_set_options", 00:19:58.944 "params": { 00:19:58.944 "timeout_sec": 30 00:19:58.944 } 00:19:58.944 }, 00:19:58.944 { 00:19:58.944 "method": "bdev_nvme_set_options", 00:19:58.944 "params": { 00:19:58.944 "action_on_timeout": "none", 00:19:58.944 "timeout_us": 0, 00:19:58.944 "timeout_admin_us": 0, 00:19:58.944 "keep_alive_timeout_ms": 10000, 00:19:58.944 "arbitration_burst": 0, 00:19:58.944 "low_priority_weight": 0, 00:19:58.944 "medium_priority_weight": 0, 00:19:58.944 "high_priority_weight": 0, 00:19:58.944 "nvme_adminq_poll_period_us": 10000, 00:19:58.944 "nvme_ioq_poll_period_us": 0, 00:19:58.944 "io_queue_requests": 0, 00:19:58.944 "delay_cmd_submit": true, 00:19:58.944 "transport_retry_count": 4, 00:19:58.944 "bdev_retry_count": 3, 00:19:58.944 "transport_ack_timeout": 0, 00:19:58.944 "ctrlr_loss_timeout_sec": 0, 00:19:58.944 "reconnect_delay_sec": 0, 00:19:58.944 "fast_io_fail_timeout_sec": 0, 00:19:58.944 "disable_auto_failback": false, 00:19:58.944 "generate_uuids": false, 00:19:58.944 "transport_tos": 0, 00:19:58.944 "nvme_error_stat": false, 00:19:58.944 "rdma_srq_size": 0, 00:19:58.944 "io_path_stat": false, 00:19:58.944 "allow_accel_sequence": false, 00:19:58.944 "rdma_max_cq_size": 0, 00:19:58.944 "rdma_cm_event_timeout_ms": 0, 00:19:58.944 "dhchap_digests": [ 00:19:58.944 "sha256", 00:19:58.944 "sha384", 00:19:58.944 "sha512" 00:19:58.944 ], 00:19:58.944 "dhchap_dhgroups": [ 00:19:58.944 "null", 00:19:58.944 "ffdhe2048", 00:19:58.944 "ffdhe3072", 00:19:58.944 "ffdhe4096", 00:19:58.944 "ffdhe6144", 00:19:58.944 "ffdhe8192" 00:19:58.944 ] 00:19:58.944 } 00:19:58.944 }, 00:19:58.944 { 00:19:58.944 "method": "bdev_nvme_set_hotplug", 00:19:58.944 "params": { 00:19:58.944 "period_us": 100000, 00:19:58.944 "enable": false 00:19:58.944 } 00:19:58.944 }, 00:19:58.944 { 00:19:58.944 "method": "bdev_malloc_create", 00:19:58.944 "params": { 00:19:58.944 "name": "malloc0", 00:19:58.944 "num_blocks": 8192, 00:19:58.944 "block_size": 4096, 00:19:58.944 "physical_block_size": 4096, 00:19:58.944 "uuid": "dc202d16-5561-41e1-a3f2-86dd174e1f64", 00:19:58.944 "optimal_io_boundary": 0, 00:19:58.944 "md_size": 0, 00:19:58.944 "dif_type": 0, 00:19:58.944 "dif_is_head_of_md": false, 00:19:58.944 "dif_pi_format": 0 00:19:58.944 } 00:19:58.944 }, 00:19:58.944 { 00:19:58.944 "method": "bdev_wait_for_examine" 00:19:58.944 } 00:19:58.944 ] 00:19:58.944 }, 00:19:58.944 { 00:19:58.944 "subsystem": "nbd", 00:19:58.944 "config": [] 00:19:58.944 }, 00:19:58.944 { 00:19:58.944 "subsystem": "scheduler", 00:19:58.944 "config": [ 00:19:58.944 { 00:19:58.944 "method": "framework_set_scheduler", 00:19:58.944 "params": { 00:19:58.944 "name": "static" 00:19:58.944 } 00:19:58.944 } 00:19:58.944 ] 00:19:58.944 }, 00:19:58.944 { 00:19:58.944 "subsystem": "nvmf", 00:19:58.944 "config": [ 00:19:58.944 { 00:19:58.944 
"method": "nvmf_set_config", 00:19:58.944 "params": { 00:19:58.944 "discovery_filter": "match_any", 00:19:58.944 "admin_cmd_passthru": { 00:19:58.944 "identify_ctrlr": false 00:19:58.944 } 00:19:58.944 } 00:19:58.944 }, 00:19:58.944 { 00:19:58.944 "method": "nvmf_set_max_subsystems", 00:19:58.944 "params": { 00:19:58.944 "max_subsystems": 1024 00:19:58.944 } 00:19:58.944 }, 00:19:58.944 { 00:19:58.944 "method": "nvmf_set_crdt", 00:19:58.944 "params": { 00:19:58.944 "crdt1": 0, 00:19:58.944 "crdt2": 0, 00:19:58.944 "crdt3": 0 00:19:58.944 } 00:19:58.944 }, 00:19:58.944 { 00:19:58.944 "method": "nvmf_create_transport", 00:19:58.944 "params": { 00:19:58.944 "trtype": "TCP", 00:19:58.944 "max_queue_depth": 128, 00:19:58.944 "max_io_qpairs_per_ctrlr": 127, 00:19:58.944 "in_capsule_data_size": 4096, 00:19:58.944 "max_io_size": 131072, 00:19:58.944 "io_unit_size": 131072, 00:19:58.944 "max_aq_depth": 128, 00:19:58.944 "num_shared_buffers": 511, 00:19:58.944 "buf_cache_size": 4294967295, 00:19:58.944 "dif_insert_or_strip": false, 00:19:58.944 "zcopy": false, 00:19:58.944 "c2h_success": false, 00:19:58.944 "sock_priority": 0, 00:19:58.944 "abort_timeout_sec": 1, 00:19:58.944 "ack_timeout": 0, 00:19:58.944 "data_wr_pool_size": 0 00:19:58.944 } 00:19:58.944 }, 00:19:58.944 { 00:19:58.944 "method": "nvmf_create_subsystem", 00:19:58.944 "params": { 00:19:58.944 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.944 "allow_any_host": false, 00:19:58.944 "serial_number": "00000000000000000000", 00:19:58.944 "model_number": "SPDK bdev Controller", 00:19:58.944 "max_namespaces": 32, 00:19:58.944 "min_cntlid": 1, 00:19:58.944 "max_cntlid": 65519, 00:19:58.944 "ana_reporting": false 00:19:58.944 } 00:19:58.944 }, 00:19:58.944 { 00:19:58.944 "method": "nvmf_subsystem_add_host", 00:19:58.944 "params": { 00:19:58.944 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.944 "host": "nqn.2016-06.io.spdk:host1", 00:19:58.944 "psk": "key0" 00:19:58.944 } 00:19:58.944 }, 00:19:58.944 { 00:19:58.944 "method": "nvmf_subsystem_add_ns", 00:19:58.944 "params": { 00:19:58.944 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.944 "namespace": { 00:19:58.944 "nsid": 1, 00:19:58.944 "bdev_name": "malloc0", 00:19:58.944 "nguid": "DC202D16556141E1A3F286DD174E1F64", 00:19:58.944 "uuid": "dc202d16-5561-41e1-a3f2-86dd174e1f64", 00:19:58.944 "no_auto_visible": false 00:19:58.944 } 00:19:58.944 } 00:19:58.944 }, 00:19:58.944 { 00:19:58.944 "method": "nvmf_subsystem_add_listener", 00:19:58.944 "params": { 00:19:58.944 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.944 "listen_address": { 00:19:58.944 "trtype": "TCP", 00:19:58.944 "adrfam": "IPv4", 00:19:58.944 "traddr": "10.0.0.2", 00:19:58.944 "trsvcid": "4420" 00:19:58.944 }, 00:19:58.944 "secure_channel": false, 00:19:58.944 "sock_impl": "ssl" 00:19:58.944 } 00:19:58.944 } 00:19:58.944 ] 00:19:58.944 } 00:19:58.944 ] 00:19:58.944 }' 00:19:58.944 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:59.202 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:59.202 "subsystems": [ 00:19:59.202 { 00:19:59.202 "subsystem": "keyring", 00:19:59.202 "config": [ 00:19:59.202 { 00:19:59.202 "method": "keyring_file_add_key", 00:19:59.202 "params": { 00:19:59.202 "name": "key0", 00:19:59.202 "path": "/tmp/tmp.UtPld0HauR" 00:19:59.202 } 00:19:59.202 } 00:19:59.202 ] 00:19:59.202 }, 00:19:59.202 { 00:19:59.202 "subsystem": "iobuf", 00:19:59.202 
"config": [ 00:19:59.202 { 00:19:59.202 "method": "iobuf_set_options", 00:19:59.202 "params": { 00:19:59.202 "small_pool_count": 8192, 00:19:59.202 "large_pool_count": 1024, 00:19:59.202 "small_bufsize": 8192, 00:19:59.202 "large_bufsize": 135168 00:19:59.202 } 00:19:59.202 } 00:19:59.202 ] 00:19:59.202 }, 00:19:59.202 { 00:19:59.202 "subsystem": "sock", 00:19:59.202 "config": [ 00:19:59.202 { 00:19:59.202 "method": "sock_set_default_impl", 00:19:59.202 "params": { 00:19:59.202 "impl_name": "posix" 00:19:59.202 } 00:19:59.202 }, 00:19:59.202 { 00:19:59.202 "method": "sock_impl_set_options", 00:19:59.202 "params": { 00:19:59.202 "impl_name": "ssl", 00:19:59.202 "recv_buf_size": 4096, 00:19:59.202 "send_buf_size": 4096, 00:19:59.202 "enable_recv_pipe": true, 00:19:59.202 "enable_quickack": false, 00:19:59.202 "enable_placement_id": 0, 00:19:59.202 "enable_zerocopy_send_server": true, 00:19:59.202 "enable_zerocopy_send_client": false, 00:19:59.202 "zerocopy_threshold": 0, 00:19:59.202 "tls_version": 0, 00:19:59.202 "enable_ktls": false 00:19:59.202 } 00:19:59.202 }, 00:19:59.202 { 00:19:59.202 "method": "sock_impl_set_options", 00:19:59.202 "params": { 00:19:59.202 "impl_name": "posix", 00:19:59.202 "recv_buf_size": 2097152, 00:19:59.202 "send_buf_size": 2097152, 00:19:59.202 "enable_recv_pipe": true, 00:19:59.202 "enable_quickack": false, 00:19:59.202 "enable_placement_id": 0, 00:19:59.202 "enable_zerocopy_send_server": true, 00:19:59.202 "enable_zerocopy_send_client": false, 00:19:59.202 "zerocopy_threshold": 0, 00:19:59.202 "tls_version": 0, 00:19:59.202 "enable_ktls": false 00:19:59.202 } 00:19:59.202 } 00:19:59.202 ] 00:19:59.202 }, 00:19:59.202 { 00:19:59.202 "subsystem": "vmd", 00:19:59.202 "config": [] 00:19:59.202 }, 00:19:59.202 { 00:19:59.202 "subsystem": "accel", 00:19:59.202 "config": [ 00:19:59.202 { 00:19:59.202 "method": "accel_set_options", 00:19:59.202 "params": { 00:19:59.202 "small_cache_size": 128, 00:19:59.202 "large_cache_size": 16, 00:19:59.202 "task_count": 2048, 00:19:59.202 "sequence_count": 2048, 00:19:59.202 "buf_count": 2048 00:19:59.202 } 00:19:59.202 } 00:19:59.202 ] 00:19:59.202 }, 00:19:59.202 { 00:19:59.202 "subsystem": "bdev", 00:19:59.202 "config": [ 00:19:59.202 { 00:19:59.202 "method": "bdev_set_options", 00:19:59.202 "params": { 00:19:59.202 "bdev_io_pool_size": 65535, 00:19:59.202 "bdev_io_cache_size": 256, 00:19:59.202 "bdev_auto_examine": true, 00:19:59.202 "iobuf_small_cache_size": 128, 00:19:59.202 "iobuf_large_cache_size": 16 00:19:59.202 } 00:19:59.203 }, 00:19:59.203 { 00:19:59.203 "method": "bdev_raid_set_options", 00:19:59.203 "params": { 00:19:59.203 "process_window_size_kb": 1024, 00:19:59.203 "process_max_bandwidth_mb_sec": 0 00:19:59.203 } 00:19:59.203 }, 00:19:59.203 { 00:19:59.203 "method": "bdev_iscsi_set_options", 00:19:59.203 "params": { 00:19:59.203 "timeout_sec": 30 00:19:59.203 } 00:19:59.203 }, 00:19:59.203 { 00:19:59.203 "method": "bdev_nvme_set_options", 00:19:59.203 "params": { 00:19:59.203 "action_on_timeout": "none", 00:19:59.203 "timeout_us": 0, 00:19:59.203 "timeout_admin_us": 0, 00:19:59.203 "keep_alive_timeout_ms": 10000, 00:19:59.203 "arbitration_burst": 0, 00:19:59.203 "low_priority_weight": 0, 00:19:59.203 "medium_priority_weight": 0, 00:19:59.203 "high_priority_weight": 0, 00:19:59.203 "nvme_adminq_poll_period_us": 10000, 00:19:59.203 "nvme_ioq_poll_period_us": 0, 00:19:59.203 "io_queue_requests": 512, 00:19:59.203 "delay_cmd_submit": true, 00:19:59.203 "transport_retry_count": 4, 00:19:59.203 "bdev_retry_count": 3, 
00:19:59.203 "transport_ack_timeout": 0, 00:19:59.203 "ctrlr_loss_timeout_sec": 0, 00:19:59.203 "reconnect_delay_sec": 0, 00:19:59.203 "fast_io_fail_timeout_sec": 0, 00:19:59.203 "disable_auto_failback": false, 00:19:59.203 "generate_uuids": false, 00:19:59.203 "transport_tos": 0, 00:19:59.203 "nvme_error_stat": false, 00:19:59.203 "rdma_srq_size": 0, 00:19:59.203 "io_path_stat": false, 00:19:59.203 "allow_accel_sequence": false, 00:19:59.203 "rdma_max_cq_size": 0, 00:19:59.203 "rdma_cm_event_timeout_ms": 0, 00:19:59.203 "dhchap_digests": [ 00:19:59.203 "sha256", 00:19:59.203 "sha384", 00:19:59.203 "sha512" 00:19:59.203 ], 00:19:59.203 "dhchap_dhgroups": [ 00:19:59.203 "null", 00:19:59.203 "ffdhe2048", 00:19:59.203 "ffdhe3072", 00:19:59.203 "ffdhe4096", 00:19:59.203 "ffdhe6144", 00:19:59.203 "ffdhe8192" 00:19:59.203 ] 00:19:59.203 } 00:19:59.203 }, 00:19:59.203 { 00:19:59.203 "method": "bdev_nvme_attach_controller", 00:19:59.203 "params": { 00:19:59.203 "name": "nvme0", 00:19:59.203 "trtype": "TCP", 00:19:59.203 "adrfam": "IPv4", 00:19:59.203 "traddr": "10.0.0.2", 00:19:59.203 "trsvcid": "4420", 00:19:59.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.203 "prchk_reftag": false, 00:19:59.203 "prchk_guard": false, 00:19:59.203 "ctrlr_loss_timeout_sec": 0, 00:19:59.203 "reconnect_delay_sec": 0, 00:19:59.203 "fast_io_fail_timeout_sec": 0, 00:19:59.203 "psk": "key0", 00:19:59.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.203 "hdgst": false, 00:19:59.203 "ddgst": false 00:19:59.203 } 00:19:59.203 }, 00:19:59.203 { 00:19:59.203 "method": "bdev_nvme_set_hotplug", 00:19:59.203 "params": { 00:19:59.203 "period_us": 100000, 00:19:59.203 "enable": false 00:19:59.203 } 00:19:59.203 }, 00:19:59.203 { 00:19:59.203 "method": "bdev_enable_histogram", 00:19:59.203 "params": { 00:19:59.203 "name": "nvme0n1", 00:19:59.203 "enable": true 00:19:59.203 } 00:19:59.203 }, 00:19:59.203 { 00:19:59.203 "method": "bdev_wait_for_examine" 00:19:59.203 } 00:19:59.203 ] 00:19:59.203 }, 00:19:59.203 { 00:19:59.203 "subsystem": "nbd", 00:19:59.203 "config": [] 00:19:59.203 } 00:19:59.203 ] 00:19:59.203 }' 00:19:59.203 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 250437 00:19:59.203 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 250437 ']' 00:19:59.203 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 250437 00:19:59.203 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:59.203 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.203 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 250437 00:19:59.203 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:59.203 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:59.203 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 250437' 00:19:59.203 killing process with pid 250437 00:19:59.203 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 250437 00:19:59.203 Received shutdown signal, test time was about 1.000000 seconds 00:19:59.203 00:19:59.203 Latency(us) 00:19:59.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.203 
=================================================================================================================== 00:19:59.203 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.203 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 250437 00:19:59.460 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 250283 00:19:59.461 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 250283 ']' 00:19:59.461 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 250283 00:19:59.461 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:59.461 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.461 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 250283 00:19:59.461 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:59.461 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:59.461 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 250283' 00:19:59.461 killing process with pid 250283 00:19:59.461 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 250283 00:19:59.461 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 250283 00:19:59.718 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:19:59.718 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.718 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:59.718 "subsystems": [ 00:19:59.718 { 00:19:59.718 "subsystem": "keyring", 00:19:59.718 "config": [ 00:19:59.718 { 00:19:59.718 "method": "keyring_file_add_key", 00:19:59.718 "params": { 00:19:59.718 "name": "key0", 00:19:59.718 "path": "/tmp/tmp.UtPld0HauR" 00:19:59.718 } 00:19:59.718 } 00:19:59.718 ] 00:19:59.718 }, 00:19:59.718 { 00:19:59.718 "subsystem": "iobuf", 00:19:59.718 "config": [ 00:19:59.718 { 00:19:59.718 "method": "iobuf_set_options", 00:19:59.718 "params": { 00:19:59.718 "small_pool_count": 8192, 00:19:59.718 "large_pool_count": 1024, 00:19:59.718 "small_bufsize": 8192, 00:19:59.718 "large_bufsize": 135168 00:19:59.718 } 00:19:59.718 } 00:19:59.718 ] 00:19:59.718 }, 00:19:59.718 { 00:19:59.718 "subsystem": "sock", 00:19:59.718 "config": [ 00:19:59.718 { 00:19:59.718 "method": "sock_set_default_impl", 00:19:59.718 "params": { 00:19:59.718 "impl_name": "posix" 00:19:59.718 } 00:19:59.718 }, 00:19:59.718 { 00:19:59.718 "method": "sock_impl_set_options", 00:19:59.718 "params": { 00:19:59.718 "impl_name": "ssl", 00:19:59.718 "recv_buf_size": 4096, 00:19:59.718 "send_buf_size": 4096, 00:19:59.718 "enable_recv_pipe": true, 00:19:59.718 "enable_quickack": false, 00:19:59.718 "enable_placement_id": 0, 00:19:59.718 "enable_zerocopy_send_server": true, 00:19:59.718 "enable_zerocopy_send_client": false, 00:19:59.718 "zerocopy_threshold": 0, 00:19:59.719 "tls_version": 0, 00:19:59.719 "enable_ktls": false 00:19:59.719 } 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "method": "sock_impl_set_options", 00:19:59.719 "params": { 00:19:59.719 "impl_name": "posix", 00:19:59.719 "recv_buf_size": 2097152, 00:19:59.719 
"send_buf_size": 2097152, 00:19:59.719 "enable_recv_pipe": true, 00:19:59.719 "enable_quickack": false, 00:19:59.719 "enable_placement_id": 0, 00:19:59.719 "enable_zerocopy_send_server": true, 00:19:59.719 "enable_zerocopy_send_client": false, 00:19:59.719 "zerocopy_threshold": 0, 00:19:59.719 "tls_version": 0, 00:19:59.719 "enable_ktls": false 00:19:59.719 } 00:19:59.719 } 00:19:59.719 ] 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "subsystem": "vmd", 00:19:59.719 "config": [] 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "subsystem": "accel", 00:19:59.719 "config": [ 00:19:59.719 { 00:19:59.719 "method": "accel_set_options", 00:19:59.719 "params": { 00:19:59.719 "small_cache_size": 128, 00:19:59.719 "large_cache_size": 16, 00:19:59.719 "task_count": 2048, 00:19:59.719 "sequence_count": 2048, 00:19:59.719 "buf_count": 2048 00:19:59.719 } 00:19:59.719 } 00:19:59.719 ] 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "subsystem": "bdev", 00:19:59.719 "config": [ 00:19:59.719 { 00:19:59.719 "method": "bdev_set_options", 00:19:59.719 "params": { 00:19:59.719 "bdev_io_pool_size": 65535, 00:19:59.719 "bdev_io_cache_size": 256, 00:19:59.719 "bdev_auto_examine": true, 00:19:59.719 "iobuf_small_cache_size": 128, 00:19:59.719 "iobuf_large_cache_size": 16 00:19:59.719 } 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "method": "bdev_raid_set_options", 00:19:59.719 "params": { 00:19:59.719 "process_window_size_kb": 1024, 00:19:59.719 "process_max_bandwidth_mb_sec": 0 00:19:59.719 } 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "method": "bdev_iscsi_set_options", 00:19:59.719 "params": { 00:19:59.719 "timeout_sec": 30 00:19:59.719 } 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "method": "bdev_nvme_set_options", 00:19:59.719 "params": { 00:19:59.719 "action_on_timeout": "none", 00:19:59.719 "timeout_us": 0, 00:19:59.719 "timeout_admin_us": 0, 00:19:59.719 "keep_alive_timeout_ms": 10000, 00:19:59.719 "arbitration_burst": 0, 00:19:59.719 "low_priority_weight": 0, 00:19:59.719 "medium_priority_weight": 0, 00:19:59.719 "high_priority_weight": 0, 00:19:59.719 "nvme_adminq_poll_period_us": 10000, 00:19:59.719 "nvme_ioq_poll_period_us": 0, 00:19:59.719 "io_queue_requests": 0, 00:19:59.719 "delay_cmd_submit": true, 00:19:59.719 "transport_retry_count": 4, 00:19:59.719 "bdev_retry_count": 3, 00:19:59.719 "transport_ack_timeout": 0, 00:19:59.719 "ctrlr_loss_timeout_sec": 0, 00:19:59.719 "reconnect_delay_sec": 0, 00:19:59.719 "fast_io_fail_timeout_sec": 0, 00:19:59.719 "disable_auto_failback": false, 00:19:59.719 "generate_uuids": false, 00:19:59.719 "transport_tos": 0, 00:19:59.719 "nvme_error_stat": false, 00:19:59.719 "rdma_srq_size": 0, 00:19:59.719 "io_path_stat": false, 00:19:59.719 "allow_accel_sequence": false, 00:19:59.719 "rdma_max_cq_size": 0, 00:19:59.719 "rdma_cm_event_timeout_ms": 0, 00:19:59.719 "dhchap_digests": [ 00:19:59.719 "sha256", 00:19:59.719 "sha384", 00:19:59.719 "sha512" 00:19:59.719 ], 00:19:59.719 "dhchap_dhgroups": [ 00:19:59.719 "null", 00:19:59.719 "ffdhe2048", 00:19:59.719 "ffdhe3072", 00:19:59.719 "ffdhe4096", 00:19:59.719 "ffdhe6144", 00:19:59.719 "ffdhe8192" 00:19:59.719 ] 00:19:59.719 } 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "method": "bdev_nvme_set_hotplug", 00:19:59.719 "params": { 00:19:59.719 "period_us": 100000, 00:19:59.719 "enable": false 00:19:59.719 } 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "method": "bdev_malloc_create", 00:19:59.719 "params": { 00:19:59.719 "name": "malloc0", 00:19:59.719 "num_blocks": 8192, 00:19:59.719 "block_size": 4096, 00:19:59.719 
"physical_block_size": 4096, 00:19:59.719 "uuid": "dc202d16-5561-41e1-a3f2-86dd174e1f64", 00:19:59.719 "optimal_io_boundary": 0, 00:19:59.719 "md_size": 0, 00:19:59.719 "dif_type": 0, 00:19:59.719 "dif_is_head_of_md": false, 00:19:59.719 "dif_pi_format": 0 00:19:59.719 } 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "method": "bdev_wait_for_examine" 00:19:59.719 } 00:19:59.719 ] 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "subsystem": "nbd", 00:19:59.719 "config": [] 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "subsystem": "scheduler", 00:19:59.719 "config": [ 00:19:59.719 { 00:19:59.719 "method": "framework_set_scheduler", 00:19:59.719 "params": { 00:19:59.719 "name": "static" 00:19:59.719 } 00:19:59.719 } 00:19:59.719 ] 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "subsystem": "nvmf", 00:19:59.719 "config": [ 00:19:59.719 { 00:19:59.719 "method": "nvmf_set_config", 00:19:59.719 "params": { 00:19:59.719 "discovery_filter": "match_any", 00:19:59.719 "admin_cmd_passthru": { 00:19:59.719 "identify_ctrlr": false 00:19:59.719 } 00:19:59.719 } 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "method": "nvmf_set_max_subsystems", 00:19:59.719 "params": { 00:19:59.719 "max_subsystems": 1024 00:19:59.719 } 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "method": "nvmf_set_crdt", 00:19:59.719 "params": { 00:19:59.719 "crdt1": 0, 00:19:59.719 "crdt2": 0, 00:19:59.719 "crdt3": 0 00:19:59.719 } 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "method": "nvmf_create_transport", 00:19:59.719 "params": { 00:19:59.719 "trtype": "TCP", 00:19:59.719 "max_queue_depth": 128, 00:19:59.719 "max_io_qpairs_per_ctrlr": 127, 00:19:59.719 "in_capsule_data_size": 4096, 00:19:59.719 "max_io_size": 131072, 00:19:59.719 "io_unit_size": 131072, 00:19:59.719 "max_aq_depth": 128, 00:19:59.719 "num_shared_buffers": 511, 00:19:59.719 "buf_cache_size": 4294967295, 00:19:59.719 "dif_insert_or_strip": false, 00:19:59.719 "zcopy": false, 00:19:59.719 "c2h_success": false, 00:19:59.719 "sock_priority": 0, 00:19:59.719 "abort_timeout_sec": 1, 00:19:59.719 "ack_timeout": 0, 00:19:59.719 "data_wr_pool_size": 0 00:19:59.719 } 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "method": "nvmf_create_subsystem", 00:19:59.719 "params": { 00:19:59.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.719 "allow_any_host": false, 00:19:59.719 "serial_number": "00000000000000000000", 00:19:59.719 "model_number": "SPDK bdev Controller", 00:19:59.719 "max_namespaces": 32, 00:19:59.719 "min_cntlid": 1, 00:19:59.719 "max_cntlid": 65519, 00:19:59.719 "ana_reporting": false 00:19:59.719 } 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "method": "nvmf_subsystem_add_host", 00:19:59.719 "params": { 00:19:59.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.719 "host": "nqn.2016-06.io.spdk:host1", 00:19:59.719 "psk": "key0" 00:19:59.719 } 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "method": "nvmf_subsystem_add_ns", 00:19:59.719 "params": { 00:19:59.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.719 "namespace": { 00:19:59.719 "nsid": 1, 00:19:59.719 "bdev_name": "malloc0", 00:19:59.719 "nguid": "DC202D16556141E1A3F286DD174E1F64", 00:19:59.719 "uuid": "dc202d16-5561-41e1-a3f2-86dd174e1f64", 00:19:59.719 "no_auto_visible": false 00:19:59.719 } 00:19:59.719 } 00:19:59.719 }, 00:19:59.719 { 00:19:59.719 "method": "nvmf_subsystem_add_listener", 00:19:59.719 "params": { 00:19:59.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.719 "listen_address": { 00:19:59.719 "trtype": "TCP", 00:19:59.719 "adrfam": "IPv4", 00:19:59.719 "traddr": "10.0.0.2", 00:19:59.719 "trsvcid": "4420" 
00:19:59.719 }, 00:19:59.719 "secure_channel": false, 00:19:59.719 "sock_impl": "ssl" 00:19:59.719 } 00:19:59.719 } 00:19:59.719 ] 00:19:59.719 } 00:19:59.719 ] 00:19:59.719 }' 00:19:59.719 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:59.719 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.719 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=250728 00:19:59.719 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:59.719 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 250728 00:19:59.719 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 250728 ']' 00:19:59.719 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.719 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:59.720 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.720 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:59.720 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.720 [2024-07-26 14:14:07.720118] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:19:59.720 [2024-07-26 14:14:07.720200] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.977 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.977 [2024-07-26 14:14:07.790398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.977 [2024-07-26 14:14:07.899983] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.977 [2024-07-26 14:14:07.900031] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.977 [2024-07-26 14:14:07.900058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.977 [2024-07-26 14:14:07.900070] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.977 [2024-07-26 14:14:07.900080] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
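The target-side JSON echoed above into /dev/fd/62 can equivalently be assembled at runtime over the RPC socket instead of being piped in at startup. A minimal sketch of that flow, reusing the key path, NQNs, and listen address from this run; exact rpc.py flag spellings vary between SPDK releases, so treat the options as illustrative rather than authoritative:

  # register the TLS PSK file under the keyring name "key0" (as in the dump above)
  rpc.py keyring_file_add_key key0 /tmp/tmp.UtPld0HauR
  rpc.py sock_set_default_impl -i posix
  rpc.py nvmf_create_transport -t TCP
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 00000000000000000000 -m 32
  # 8192 blocks x 4096 B = 32 MiB backing bdev for namespace 1
  rpc.py bdev_malloc_create -b malloc0 32 4096
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  # bind the host to the PSK by keyring name, matching "psk": "key0" above
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420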
00:19:59.977 [2024-07-26 14:14:07.900141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.235 [2024-07-26 14:14:08.123729] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.235 [2024-07-26 14:14:08.170972] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.235 [2024-07-26 14:14:08.171177] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.800 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:00.800 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:00.800 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:00.800 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:00.800 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.800 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.800 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=250878 00:20:00.800 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 250878 /var/tmp/bdevperf.sock 00:20:00.800 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 250878 ']' 00:20:00.800 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.800 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:00.800 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:00.800 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
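On the initiator side, bdevperf is started suspended with -z and handed its JSON config (echoed just below) on an anonymous pipe, which is why /dev/fd/63 appears on its command line; I/O only begins once perform_tests arrives over the RPC socket. A rough sketch of the same sequence, assuming $bperfcfg holds the config string built by tls.sh:

  # start suspended (-z), private RPC socket, 128-deep 4k verify workload for 1 s
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
  # poll until the TLS-attached controller "nvme0" is visible
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  # kick off the actual benchmark run
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests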
00:20:00.800 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.800 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:20:00.800 "subsystems": [ 00:20:00.800 { 00:20:00.800 "subsystem": "keyring", 00:20:00.800 "config": [ 00:20:00.800 { 00:20:00.800 "method": "keyring_file_add_key", 00:20:00.800 "params": { 00:20:00.800 "name": "key0", 00:20:00.800 "path": "/tmp/tmp.UtPld0HauR" 00:20:00.800 } 00:20:00.800 } 00:20:00.800 ] 00:20:00.800 }, 00:20:00.800 { 00:20:00.800 "subsystem": "iobuf", 00:20:00.800 "config": [ 00:20:00.800 { 00:20:00.800 "method": "iobuf_set_options", 00:20:00.800 "params": { 00:20:00.800 "small_pool_count": 8192, 00:20:00.800 "large_pool_count": 1024, 00:20:00.800 "small_bufsize": 8192, 00:20:00.800 "large_bufsize": 135168 00:20:00.800 } 00:20:00.800 } 00:20:00.800 ] 00:20:00.800 }, 00:20:00.800 { 00:20:00.800 "subsystem": "sock", 00:20:00.800 "config": [ 00:20:00.800 { 00:20:00.800 "method": "sock_set_default_impl", 00:20:00.800 "params": { 00:20:00.800 "impl_name": "posix" 00:20:00.800 } 00:20:00.800 }, 00:20:00.800 { 00:20:00.800 "method": "sock_impl_set_options", 00:20:00.800 "params": { 00:20:00.800 "impl_name": "ssl", 00:20:00.800 "recv_buf_size": 4096, 00:20:00.800 "send_buf_size": 4096, 00:20:00.800 "enable_recv_pipe": true, 00:20:00.800 "enable_quickack": false, 00:20:00.800 "enable_placement_id": 0, 00:20:00.800 "enable_zerocopy_send_server": true, 00:20:00.800 "enable_zerocopy_send_client": false, 00:20:00.800 "zerocopy_threshold": 0, 00:20:00.800 "tls_version": 0, 00:20:00.800 "enable_ktls": false 00:20:00.800 } 00:20:00.800 }, 00:20:00.800 { 00:20:00.800 "method": "sock_impl_set_options", 00:20:00.800 "params": { 00:20:00.800 "impl_name": "posix", 00:20:00.800 "recv_buf_size": 2097152, 00:20:00.800 "send_buf_size": 2097152, 00:20:00.800 "enable_recv_pipe": true, 00:20:00.800 "enable_quickack": false, 00:20:00.800 "enable_placement_id": 0, 00:20:00.800 "enable_zerocopy_send_server": true, 00:20:00.800 "enable_zerocopy_send_client": false, 00:20:00.800 "zerocopy_threshold": 0, 00:20:00.800 "tls_version": 0, 00:20:00.800 "enable_ktls": false 00:20:00.800 } 00:20:00.800 } 00:20:00.800 ] 00:20:00.800 }, 00:20:00.800 { 00:20:00.800 "subsystem": "vmd", 00:20:00.800 "config": [] 00:20:00.800 }, 00:20:00.800 { 00:20:00.800 "subsystem": "accel", 00:20:00.800 "config": [ 00:20:00.800 { 00:20:00.800 "method": "accel_set_options", 00:20:00.800 "params": { 00:20:00.800 "small_cache_size": 128, 00:20:00.800 "large_cache_size": 16, 00:20:00.800 "task_count": 2048, 00:20:00.800 "sequence_count": 2048, 00:20:00.800 "buf_count": 2048 00:20:00.800 } 00:20:00.800 } 00:20:00.800 ] 00:20:00.800 }, 00:20:00.800 { 00:20:00.800 "subsystem": "bdev", 00:20:00.800 "config": [ 00:20:00.800 { 00:20:00.800 "method": "bdev_set_options", 00:20:00.800 "params": { 00:20:00.800 "bdev_io_pool_size": 65535, 00:20:00.800 "bdev_io_cache_size": 256, 00:20:00.800 "bdev_auto_examine": true, 00:20:00.800 "iobuf_small_cache_size": 128, 00:20:00.800 "iobuf_large_cache_size": 16 00:20:00.800 } 00:20:00.800 }, 00:20:00.800 { 00:20:00.800 "method": "bdev_raid_set_options", 00:20:00.800 "params": { 00:20:00.800 "process_window_size_kb": 1024, 00:20:00.800 "process_max_bandwidth_mb_sec": 0 00:20:00.800 } 00:20:00.800 }, 00:20:00.800 { 00:20:00.800 "method": "bdev_iscsi_set_options", 00:20:00.800 "params": { 00:20:00.800 "timeout_sec": 30 00:20:00.800 } 00:20:00.800 }, 00:20:00.800 { 00:20:00.800 "method": 
"bdev_nvme_set_options", 00:20:00.800 "params": { 00:20:00.800 "action_on_timeout": "none", 00:20:00.800 "timeout_us": 0, 00:20:00.800 "timeout_admin_us": 0, 00:20:00.800 "keep_alive_timeout_ms": 10000, 00:20:00.800 "arbitration_burst": 0, 00:20:00.800 "low_priority_weight": 0, 00:20:00.800 "medium_priority_weight": 0, 00:20:00.800 "high_priority_weight": 0, 00:20:00.800 "nvme_adminq_poll_period_us": 10000, 00:20:00.800 "nvme_ioq_poll_period_us": 0, 00:20:00.800 "io_queue_requests": 512, 00:20:00.800 "delay_cmd_submit": true, 00:20:00.800 "transport_retry_count": 4, 00:20:00.800 "bdev_retry_count": 3, 00:20:00.800 "transport_ack_timeout": 0, 00:20:00.800 "ctrlr_loss_timeout_sec": 0, 00:20:00.800 "reconnect_delay_sec": 0, 00:20:00.800 "fast_io_fail_timeout_sec": 0, 00:20:00.800 "disable_auto_failback": false, 00:20:00.800 "generate_uuids": false, 00:20:00.800 "transport_tos": 0, 00:20:00.800 "nvme_error_stat": false, 00:20:00.800 "rdma_srq_size": 0, 00:20:00.800 "io_path_stat": false, 00:20:00.800 "allow_accel_sequence": false, 00:20:00.800 "rdma_max_cq_size": 0, 00:20:00.800 "rdma_cm_event_timeout_ms": 0, 00:20:00.800 "dhchap_digests": [ 00:20:00.800 "sha256", 00:20:00.800 "sha384", 00:20:00.800 "sha512" 00:20:00.800 ], 00:20:00.800 "dhchap_dhgroups": [ 00:20:00.800 "null", 00:20:00.800 "ffdhe2048", 00:20:00.800 "ffdhe3072", 00:20:00.800 "ffdhe4096", 00:20:00.800 "ffdhe6144", 00:20:00.800 "ffdhe8192" 00:20:00.800 ] 00:20:00.800 } 00:20:00.800 }, 00:20:00.800 { 00:20:00.800 "method": "bdev_nvme_attach_controller", 00:20:00.800 "params": { 00:20:00.800 "name": "nvme0", 00:20:00.800 "trtype": "TCP", 00:20:00.800 "adrfam": "IPv4", 00:20:00.800 "traddr": "10.0.0.2", 00:20:00.800 "trsvcid": "4420", 00:20:00.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.800 "prchk_reftag": false, 00:20:00.800 "prchk_guard": false, 00:20:00.800 "ctrlr_loss_timeout_sec": 0, 00:20:00.800 "reconnect_delay_sec": 0, 00:20:00.800 "fast_io_fail_timeout_sec": 0, 00:20:00.800 "psk": "key0", 00:20:00.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.801 "hdgst": false, 00:20:00.801 "ddgst": false 00:20:00.801 } 00:20:00.801 }, 00:20:00.801 { 00:20:00.801 "method": "bdev_nvme_set_hotplug", 00:20:00.801 "params": { 00:20:00.801 "period_us": 100000, 00:20:00.801 "enable": false 00:20:00.801 } 00:20:00.801 }, 00:20:00.801 { 00:20:00.801 "method": "bdev_enable_histogram", 00:20:00.801 "params": { 00:20:00.801 "name": "nvme0n1", 00:20:00.801 "enable": true 00:20:00.801 } 00:20:00.801 }, 00:20:00.801 { 00:20:00.801 "method": "bdev_wait_for_examine" 00:20:00.801 } 00:20:00.801 ] 00:20:00.801 }, 00:20:00.801 { 00:20:00.801 "subsystem": "nbd", 00:20:00.801 "config": [] 00:20:00.801 } 00:20:00.801 ] 00:20:00.801 }' 00:20:00.801 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.801 [2024-07-26 14:14:08.723123] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:20:00.801 [2024-07-26 14:14:08.723190] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250878 ] 00:20:00.801 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.801 [2024-07-26 14:14:08.780933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.058 [2024-07-26 14:14:08.887312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.058 [2024-07-26 14:14:09.060975] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.989 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.989 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:01.989 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:01.989 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:20:01.989 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.989 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.247 Running I/O for 1 seconds... 00:20:03.179 00:20:03.179 Latency(us) 00:20:03.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.179 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:03.179 Verification LBA range: start 0x0 length 0x2000 00:20:03.179 nvme0n1 : 1.04 3153.56 12.32 0.00 0.00 39786.43 8980.86 53205.52 00:20:03.179 =================================================================================================================== 00:20:03.179 Total : 3153.56 12.32 0.00 0.00 39786.43 8980.86 53205.52 00:20:03.179 0 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:03.179 nvmf_trace.0 00:20:03.179 14:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 250878 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 250878 ']' 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 250878 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 250878 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 250878' 00:20:03.179 killing process with pid 250878 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 250878 00:20:03.179 Received shutdown signal, test time was about 1.000000 seconds 00:20:03.179 00:20:03.179 Latency(us) 00:20:03.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.179 =================================================================================================================== 00:20:03.179 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.179 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 250878 00:20:03.438 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:03.438 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:03.438 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:03.696 rmmod nvme_tcp 00:20:03.696 rmmod nvme_fabrics 00:20:03.696 rmmod nvme_keyring 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 250728 ']' 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 250728 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 250728 ']' 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 250728 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.696 14:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 250728 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 250728' 00:20:03.696 killing process with pid 250728 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 250728 00:20:03.696 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 250728 00:20:03.955 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:03.955 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:03.955 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:03.955 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:03.955 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:03.955 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.955 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.955 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.858 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:05.858 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.7995QD2od2 /tmp/tmp.ScuL97guE7 /tmp/tmp.UtPld0HauR 00:20:05.858 00:20:05.858 real 1m20.697s 00:20:05.858 user 2m5.367s 00:20:05.858 sys 0m26.324s 00:20:05.858 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:05.858 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.858 ************************************ 00:20:05.858 END TEST nvmf_tls 00:20:05.858 ************************************ 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:06.116 ************************************ 00:20:06.116 START TEST nvmf_fips 00:20:06.116 ************************************ 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:06.116 * Looking for test storage... 
00:20:06.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.116 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.117 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:06.117 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:06.118 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:06.118 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:06.377 Error setting digest 00:20:06.377 00A25BE88E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:06.377 00A25BE88E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:06.377 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:08.279 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.279 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:08.279 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:08.279 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:08.279 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:08.279 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:08.279 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:08.279 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:08.279 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:08.279 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:08.279 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:20:08.279 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:08.279 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:08.279 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:08.279 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:08.280 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 
00:20:08.280 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:08.280 Found net devices under 0000:09:00.0: cvl_0_0 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:08.280 Found net devices under 0000:09:00.1: cvl_0_1 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:08.280 
14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.280 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.538 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.538 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.538 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:08.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:20:08.539 00:20:08.539 --- 10.0.0.2 ping statistics --- 00:20:08.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.539 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:08.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:20:08.539 00:20:08.539 --- 10.0.0.1 ping statistics --- 00:20:08.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.539 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=253241 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 253241 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 253241 ']' 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:08.539 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:08.539 [2024-07-26 14:14:16.487719] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
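The target start just traced condenses to the sketch below. The launch command, shm id, trace mask, and core mask are verbatim from the trace; the readiness loop is an assumption standing in for autotest's waitforlisten helper, whose internals are not echoed here:

  # launch nvmf_tgt inside the target namespace: shm id 0, all trace groups, core mask 0x2
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # block until the app answers RPCs on the default UNIX socket (assumed probe; any cheap RPC works)
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done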
00:20:08.539 [2024-07-26 14:14:16.487808] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.539 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.539 [2024-07-26 14:14:16.552437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.797 [2024-07-26 14:14:16.655760] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.797 [2024-07-26 14:14:16.655831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.797 [2024-07-26 14:14:16.655844] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.797 [2024-07-26 14:14:16.655854] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.797 [2024-07-26 14:14:16.655883] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.797 [2024-07-26 14:14:16.655908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:09.729 [2024-07-26 14:14:17.644362] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.729 [2024-07-26 14:14:17.660382] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.729 [2024-07-26 14:14:17.660556] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.729 
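The key configured above is a TLS PSK in the NVMe/TCP interchange format, NVMeTLSkey-1:&lt;hash id&gt;:&lt;base64 secret&gt;: (hash id 01 denoting SHA-256). A minimal sketch of the target-side wiring, using only values visible in this trace; the individual RPC calls behind fips.sh@24 are not echoed, so the method names below are the standard rpc.py entry points and should be read as assumptions:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  echo -n "$key" > test/nvmf/fips/key.txt
  chmod 0600 test/nvmf/fips/key.txt        # PSK files must be readable by owner only
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  # the deprecation warning below confirms the host is admitted with a PSK file path:
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/fips/key.txt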
[2024-07-26 14:14:17.691669] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:09.729 malloc0 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=253394 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 253394 /var/tmp/bdevperf.sock 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 253394 ']' 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:09.729 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:09.987 [2024-07-26 14:14:17.785379] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:20:09.987 [2024-07-26 14:14:17.785476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid253394 ] 00:20:09.987 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.987 [2024-07-26 14:14:17.843063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.987 [2024-07-26 14:14:17.951272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.920 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:10.920 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:10.920 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:11.178 [2024-07-26 14:14:19.030749] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.178 [2024-07-26 14:14:19.030911] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:11.178 TLSTESTn1 00:20:11.178 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:11.436 Running I/O for 10 seconds... 
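Condensed, the initiator side of this run is the three commands traced above: bdevperf starts idle (-z) on its own RPC socket, the TLS-wrapped controller is attached with the same PSK, then perform_tests kicks off the 10-second queue-depth-128 verify workload. Relative paths are substituted here for the workspace-absolute ones in the trace:

  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/fips/key.txt
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests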
00:20:21.394
00:20:21.394 Latency(us)
00:20:21.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:21.394 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:21.394 Verification LBA range: start 0x0 length 0x2000
00:20:21.395 TLSTESTn1 : 10.05 3398.83 13.28 0.00 0.00 37561.32 8252.68 53593.88
00:20:21.395 ===================================================================================================================
00:20:21.395 Total : 3398.83 13.28 0.00 0.00 37561.32 8252.68 53593.88
00:20:21.395 0
00:20:21.395 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:20:21.395 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:20:21.395 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id
00:20:21.395 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0
00:20:21.395 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:20:21.395 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:20:21.395 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:20:21.395 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:20:21.395 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files
00:20:21.395 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:21.395 nvmf_trace.0
00:20:21.395 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0
00:20:21.395 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 253394
00:20:21.395 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 253394 ']'
00:20:21.395 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 253394
00:20:21.395 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:20:21.653 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:21.653 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 253394
00:20:21.653 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:20:21.653 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:20:21.653 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 253394'
killing process with pid 253394
00:20:21.653 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 253394
00:20:21.653 Received shutdown signal, test time was about 10.000000 seconds
00:20:21.653
00:20:21.653 Latency(us)
00:20:21.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:21.653 ===================================================================================================================
00:20:21.653 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:21.653 [2024-07-26
14:14:29.440809] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:21.653 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 253394 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:21.911 rmmod nvme_tcp 00:20:21.911 rmmod nvme_fabrics 00:20:21.911 rmmod nvme_keyring 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 253241 ']' 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 253241 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 253241 ']' 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 253241 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 253241 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 253241' 00:20:21.911 killing process with pid 253241 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 253241 00:20:21.911 [2024-07-26 14:14:29.782658] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:21.911 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 253241 00:20:22.169 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:22.169 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:22.169 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:22.169 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.169 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:22.169 14:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.169 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.169 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.705 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:24.705 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:24.705 00:20:24.705 real 0m18.184s 00:20:24.705 user 0m24.422s 00:20:24.705 sys 0m5.422s 00:20:24.705 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:24.705 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:24.705 ************************************ 00:20:24.705 END TEST nvmf_fips 00:20:24.705 ************************************ 00:20:24.705 14:14:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:20:24.705 14:14:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:20:24.705 14:14:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:20:24.705 14:14:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:20:24.705 14:14:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:20:24.705 14:14:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.615 
14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:26.615 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:26.615 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:26.615 Found net devices under 0000:09:00.0: cvl_0_0 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.615 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:26.616 Found net devices under 0000:09:00.1: cvl_0_1 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:26.616 ************************************ 00:20:26.616 START TEST nvmf_perf_adq 00:20:26.616 ************************************ 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:26.616 * Looking for test storage... 
00:20:26.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.616 14:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:26.616 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.522 14:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:28.522 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:28.522 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.522 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:28.523 Found net devices under 0000:09:00.0: cvl_0_0 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:28.523 Found net devices under 0000:09:00.1: cvl_0_1 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:28.523 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:29.092 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:32.381 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:37.652 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:37.652 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:37.653 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:37.653 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:37.653 Found net devices under 0000:09:00.0: cvl_0_0 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.653 14:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:37.653 Found net devices under 0000:09:00.1: cvl_0_1 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
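The nvmf_tcp_init sequence traced above reduces to a short series of iproute2 steps. As a condensed recap of this particular run (the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are the values observed here, not fixed defaults):

    # Flush stale addresses, then isolate the target-side port in its own namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Address both ends: the initiator stays in the root namespace, the target in the netns.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # Bring up both links plus the namespace loopback.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables rule traced next opens TCP port 4420 on the initiator-facing interface, and the two pings confirm reachability in both directions across the namespace boundary before the target is launched.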
00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.653 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:37.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:20:37.653 00:20:37.654 --- 10.0.0.2 ping statistics --- 00:20:37.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.654 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:20:37.654 00:20:37.654 --- 10.0.0.1 ping statistics --- 00:20:37.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.654 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=259406 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 259406 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 259406 ']' 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:37.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.654 [2024-07-26 14:14:45.258010] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:20:37.654 [2024-07-26 14:14:45.258084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.654 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.654 [2024-07-26 14:14:45.320903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.654 [2024-07-26 14:14:45.436185] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.654 [2024-07-26 14:14:45.436229] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.654 [2024-07-26 14:14:45.436257] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.654 [2024-07-26 14:14:45.436268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.654 [2024-07-26 14:14:45.436278] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:37.654 [2024-07-26 14:14:45.436337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.654 [2024-07-26 14:14:45.436393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.654 [2024-07-26 14:14:45.436756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.654 [2024-07-26 14:14:45.436760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
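For reference, the adq_configure_nvmf_target helper being traced here reduces to the RPC sequence below. This is a condensed sketch using the argument values observed in this run, with rpc_cmd standing in for the harness wrapper around scripts/rpc.py:

    # ADQ relies on placement-id aware sockets; this pass uses mode 0, the later pass mode 1.
    rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    rpc_cmd framework_start_init
    rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    # Back the subsystem with a 64 MiB malloc bdev and listen on the namespaced target address.
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420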
00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.654 [2024-07-26 14:14:45.636058] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.654 Malloc1 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.654 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.912 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.912 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:37.912 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.912 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.912 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.912 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.912 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.912 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.912 [2024-07-26 14:14:45.686091] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.912 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.912 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=259558 00:20:37.912 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:37.912 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:37.912 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.810 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:39.810 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.811 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.811 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.811 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:39.811 "tick_rate": 2700000000, 00:20:39.811 "poll_groups": [ 00:20:39.811 { 00:20:39.811 "name": "nvmf_tgt_poll_group_000", 00:20:39.811 "admin_qpairs": 1, 00:20:39.811 "io_qpairs": 1, 00:20:39.811 "current_admin_qpairs": 1, 00:20:39.811 "current_io_qpairs": 1, 00:20:39.811 "pending_bdev_io": 0, 00:20:39.811 "completed_nvme_io": 19327, 00:20:39.811 "transports": [ 00:20:39.811 { 00:20:39.811 "trtype": "TCP" 00:20:39.811 } 00:20:39.811 ] 00:20:39.811 }, 00:20:39.811 { 00:20:39.811 "name": "nvmf_tgt_poll_group_001", 00:20:39.811 "admin_qpairs": 0, 00:20:39.811 "io_qpairs": 1, 00:20:39.811 "current_admin_qpairs": 0, 00:20:39.811 "current_io_qpairs": 1, 00:20:39.811 "pending_bdev_io": 0, 00:20:39.811 "completed_nvme_io": 20897, 00:20:39.811 "transports": [ 00:20:39.811 { 00:20:39.811 "trtype": "TCP" 00:20:39.811 } 00:20:39.811 ] 00:20:39.811 }, 00:20:39.811 { 00:20:39.811 "name": "nvmf_tgt_poll_group_002", 00:20:39.811 "admin_qpairs": 0, 00:20:39.811 "io_qpairs": 1, 00:20:39.811 "current_admin_qpairs": 0, 00:20:39.811 "current_io_qpairs": 1, 00:20:39.811 "pending_bdev_io": 0, 00:20:39.811 "completed_nvme_io": 20399, 00:20:39.811 "transports": [ 00:20:39.811 { 00:20:39.811 "trtype": "TCP" 00:20:39.811 } 00:20:39.811 ] 00:20:39.811 }, 00:20:39.811 { 00:20:39.811 "name": "nvmf_tgt_poll_group_003", 00:20:39.811 "admin_qpairs": 0, 00:20:39.811 "io_qpairs": 1, 00:20:39.811 "current_admin_qpairs": 0, 00:20:39.811 "current_io_qpairs": 1, 00:20:39.811 "pending_bdev_io": 0, 00:20:39.811 "completed_nvme_io": 20685, 00:20:39.811 "transports": [ 00:20:39.811 { 00:20:39.811 "trtype": "TCP" 00:20:39.811 } 00:20:39.811 ] 00:20:39.811 } 00:20:39.811 ] 00:20:39.811 }' 00:20:39.811 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:39.811 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:39.811 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:39.811 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:39.811 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 259558 00:20:47.914 Initializing NVMe Controllers 00:20:47.914 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:47.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:47.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:47.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:47.914 Initialization complete. Launching workers. 00:20:47.914 ======================================================== 00:20:47.914 Latency(us) 00:20:47.914 Device Information : IOPS MiB/s Average min max 00:20:47.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10773.60 42.08 5941.72 2491.04 39811.26 00:20:47.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10877.80 42.49 5884.35 2165.54 9497.62 00:20:47.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10601.00 41.41 6037.99 2512.63 9913.28 00:20:47.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10195.00 39.82 6278.57 2469.09 10534.59 00:20:47.914 ======================================================== 00:20:47.914 Total : 42447.39 165.81 6031.97 2165.54 39811.26 00:20:47.914 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:47.914 rmmod nvme_tcp 00:20:47.914 rmmod nvme_fabrics 00:20:47.914 rmmod nvme_keyring 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 259406 ']' 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 259406 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 259406 ']' 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 259406 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 259406 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:47.914 14:14:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 259406' 00:20:47.914 killing process with pid 259406 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 259406 00:20:47.914 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 259406 00:20:48.482 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:48.482 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:48.482 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:48.482 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:48.482 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:48.482 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.482 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.482 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.383 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:50.384 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:50.384 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:50.948 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:53.478 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:58.760 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:58.760 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:58.760 Found net devices under 0000:09:00.0: cvl_0_0 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:58.760 Found net devices under 0000:09:00.1: cvl_0_1 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.760 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.761 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:58.761 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.761 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.761 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:58.761 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.761 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.761 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:58.761 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:58.761 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.761 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:58.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:20:58.761 00:20:58.761 --- 10.0.0.2 ping statistics --- 00:20:58.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.761 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:58.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:20:58.761 00:20:58.761 --- 10.0.0.1 ping statistics --- 00:20:58.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.761 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:58.761 net.core.busy_poll = 1 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:58.761 net.core.busy_read = 1 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:58.761 
14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=262285 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 262285 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 262285 ']' 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.761 [2024-07-26 14:15:06.338016] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:20:58.761 [2024-07-26 14:15:06.338096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.761 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.761 [2024-07-26 14:15:06.404883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:58.761 [2024-07-26 14:15:06.514880] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.761 [2024-07-26 14:15:06.514939] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.761 [2024-07-26 14:15:06.514967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.761 [2024-07-26 14:15:06.514979] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.761 [2024-07-26 14:15:06.514988] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
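The adq_configure_driver steps traced a few records up are the NIC-side half of ADQ. A condensed recap, again using this run's namespace, device, and filter values rather than defaults:

    # Enable hardware TC offload on the target port and turn off packet-inspect optimization.
    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # Busy polling lets socket reads spin on the device queue instead of waiting for interrupts.
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Define two traffic classes of two queues each, then steer NVMe/TCP (dst port 4420) into hardware TC 1.
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # Align transmit queue selection with the paired receive queues (SPDK helper script).
    ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0

With this in place, the second nvmf_tgt instance, configured with --sock-priority 1 and placement-id mode 1, keeps all four I/O queue pairs on a single poll group, which is what the nvmf_get_stats check further down verifies.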
00:20:58.761 [2024-07-26 14:15:06.515051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.761 [2024-07-26 14:15:06.515076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.761 [2024-07-26 14:15:06.515133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.761 [2024-07-26 14:15:06.515136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.761 [2024-07-26 14:15:06.742665] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:58.761 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.762 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.762 Malloc1 00:20:58.762 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.019 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:59.019 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.020 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.020 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.020 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:59.020 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.020 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.020 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.020 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:59.020 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.020 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.020 [2024-07-26 14:15:06.796155] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.020 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.020 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=262322 00:20:59.020 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:59.020 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:59.020 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.919 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:00.919 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.919 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.920 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.920 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:00.920 "tick_rate": 2700000000, 00:21:00.920 "poll_groups": [ 00:21:00.920 { 00:21:00.920 "name": "nvmf_tgt_poll_group_000", 00:21:00.920 "admin_qpairs": 1, 00:21:00.920 "io_qpairs": 4, 00:21:00.920 "current_admin_qpairs": 1, 00:21:00.920 
"current_io_qpairs": 4, 00:21:00.920 "pending_bdev_io": 0, 00:21:00.920 "completed_nvme_io": 34679, 00:21:00.920 "transports": [ 00:21:00.920 { 00:21:00.920 "trtype": "TCP" 00:21:00.920 } 00:21:00.920 ] 00:21:00.920 }, 00:21:00.920 { 00:21:00.920 "name": "nvmf_tgt_poll_group_001", 00:21:00.920 "admin_qpairs": 0, 00:21:00.920 "io_qpairs": 0, 00:21:00.920 "current_admin_qpairs": 0, 00:21:00.920 "current_io_qpairs": 0, 00:21:00.920 "pending_bdev_io": 0, 00:21:00.920 "completed_nvme_io": 0, 00:21:00.920 "transports": [ 00:21:00.920 { 00:21:00.920 "trtype": "TCP" 00:21:00.920 } 00:21:00.920 ] 00:21:00.920 }, 00:21:00.920 { 00:21:00.920 "name": "nvmf_tgt_poll_group_002", 00:21:00.920 "admin_qpairs": 0, 00:21:00.920 "io_qpairs": 0, 00:21:00.920 "current_admin_qpairs": 0, 00:21:00.920 "current_io_qpairs": 0, 00:21:00.920 "pending_bdev_io": 0, 00:21:00.920 "completed_nvme_io": 0, 00:21:00.920 "transports": [ 00:21:00.920 { 00:21:00.920 "trtype": "TCP" 00:21:00.920 } 00:21:00.920 ] 00:21:00.920 }, 00:21:00.920 { 00:21:00.920 "name": "nvmf_tgt_poll_group_003", 00:21:00.920 "admin_qpairs": 0, 00:21:00.920 "io_qpairs": 0, 00:21:00.920 "current_admin_qpairs": 0, 00:21:00.920 "current_io_qpairs": 0, 00:21:00.920 "pending_bdev_io": 0, 00:21:00.920 "completed_nvme_io": 0, 00:21:00.920 "transports": [ 00:21:00.920 { 00:21:00.920 "trtype": "TCP" 00:21:00.920 } 00:21:00.920 ] 00:21:00.920 } 00:21:00.920 ] 00:21:00.920 }' 00:21:00.920 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:00.920 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:00.920 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:21:00.920 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:21:00.920 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 262322 00:21:09.195 Initializing NVMe Controllers 00:21:09.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:09.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:09.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:09.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:09.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:09.195 Initialization complete. Launching workers. 
00:21:09.195 ======================================================== 00:21:09.195 Latency(us) 00:21:09.195 Device Information : IOPS MiB/s Average min max 00:21:09.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5094.70 19.90 12570.84 1592.70 58909.19 00:21:09.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4646.70 18.15 13773.77 1858.65 57929.47 00:21:09.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4137.60 16.16 15513.25 1495.24 60365.58 00:21:09.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4434.20 17.32 14454.95 1124.46 61772.31 00:21:09.195 ======================================================== 00:21:09.195 Total : 18313.20 71.54 13997.06 1124.46 61772.31 00:21:09.195 00:21:09.195 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:09.195 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:09.195 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:09.195 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:09.195 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:09.195 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:09.195 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:09.195 rmmod nvme_tcp 00:21:09.195 rmmod nvme_fabrics 00:21:09.195 rmmod nvme_keyring 00:21:09.195 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:09.195 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:09.195 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:09.195 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 262285 ']' 00:21:09.195 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 262285 00:21:09.195 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 262285 ']' 00:21:09.195 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 262285 00:21:09.195 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:09.195 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:09.195 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 262285 00:21:09.195 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:09.195 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:09.195 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 262285' 00:21:09.195 killing process with pid 262285 00:21:09.195 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 262285 00:21:09.195 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 262285 00:21:09.453 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:09.454 14:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:09.454 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:09.454 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:09.454 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:09.454 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.454 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.454 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:12.745 00:21:12.745 real 0m46.218s 00:21:12.745 user 2m39.917s 00:21:12.745 sys 0m10.207s 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:12.745 ************************************ 00:21:12.745 END TEST nvmf_perf_adq 00:21:12.745 ************************************ 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:12.745 ************************************ 00:21:12.745 START TEST nvmf_shutdown 00:21:12.745 ************************************ 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:12.745 * Looking for test storage... 
00:21:12.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.745 14:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:12.745 14:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:12.745 ************************************ 00:21:12.745 START TEST nvmf_shutdown_tc1 00:21:12.745 ************************************ 00:21:12.745 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:21:12.746 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:12.746 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:12.746 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:12.746 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.746 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:12.746 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:12.746 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:12.746 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.746 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.746 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.746 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:12.746 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:12.746 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:12.746 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:15.285 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:15.285 14:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:15.285 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:15.285 Found net devices under 0000:09:00.0: cvl_0_0 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.285 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:15.286 Found net devices under 0000:09:00.1: cvl_0_1 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:15.286 14:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:15.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:15.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:21:15.286 00:21:15.286 --- 10.0.0.2 ping statistics --- 00:21:15.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.286 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:15.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:15.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:21:15.286 00:21:15.286 --- 10.0.0.1 ping statistics --- 00:21:15.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.286 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=266133 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 266133 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 266133 ']' 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:15.286 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:15.286 [2024-07-26 14:15:22.971450] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:21:15.286 [2024-07-26 14:15:22.971546] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.286 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.286 [2024-07-26 14:15:23.037940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.286 [2024-07-26 14:15:23.145047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.286 [2024-07-26 14:15:23.145101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.286 [2024-07-26 14:15:23.145121] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.286 [2024-07-26 14:15:23.145132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.286 [2024-07-26 14:15:23.145142] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
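Earlier in this trace, gather_supported_nvmf_pci_devs matched the two Intel E810 ports (vendor:device 8086:159b) and resolved their kernel interface names, cvl_0_0 and cvl_0_1, through sysfs. A rough equivalent of that lookup, assuming lspci is available (the real helper walks a prebuilt pci_bus_cache rather than calling lspci):

    #!/usr/bin/env bash
    # Sketch: map supported NIC PCI functions to net device names via sysfs.
    # The device ID (Intel E810, 8086:159b) is taken from the trace above.
    for pci in $(lspci -Dnmm -d 8086:159b | awk '{print $1}'); do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            # each entry under .../net/ is a kernel interface on this port
            [[ -e $path ]] && echo "Found net devices under $pci: ${path##*/}"
        done
    done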
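nvmf_tcp_init then builds a self-contained TCP test topology from those two ports: one is moved into a private network namespace and becomes the target endpoint (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP port 4420 on the initiator-side interface, and a ping in each direction proves the path before any NVMe traffic flows. The commands below mirror the traced sequence one-for-one; only the script wrapper around them is added:

    #!/usr/bin/env bash
    # Sketch of the namespace topology built by nvmf_tcp_init above.
    set -e
    NS=cvl_0_0_ns_spdk
    TARGET_IF=cvl_0_0      # moved into the namespace, serves 10.0.0.2:4420
    INITIATOR_IF=cvl_0_1   # stays in the root namespace as 10.0.0.1

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # open port 4420 on the initiator-side interface, as the traced rule does,
    # then verify reachability in both directions
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

Because the two E810 ports are presumably cabled back-to-back on this rig, traffic between the namespaces crosses the physical link, so the test exercises the real NIC datapath rather than kernel loopback.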
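With the topology up, the harness launches nvmf_tgt inside the namespace (the traced -m 0x1E core mask maps to cores 1-4, matching the four reactors reported below) and waitforlisten blocks until the RPC socket answers. A hedged sketch of that startup handshake; the polling loop is an illustration of what waitforlisten does, not its actual implementation, while the binary path and flags match this workspace's log:

    #!/usr/bin/env bash
    # Sketch: start the target in the test namespace, wait for its RPC socket.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    for _ in $(seq 1 100); do
        # a successful rpc_get_methods means the app is up and serving RPCs
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods \
            > /dev/null 2>&1; then
            break
        fi
        # bail out early if the target died during startup
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
        sleep 0.1
    done

The RPC endpoint is a UNIX domain socket, so it is reachable from the root namespace even though the target runs inside cvl_0_0_ns_spdk.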
00:21:15.286 [2024-07-26 14:15:23.145202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.286 [2024-07-26 14:15:23.145263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:15.286 [2024-07-26 14:15:23.145329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:15.286 [2024-07-26 14:15:23.145332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.286 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:15.286 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:15.286 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:15.286 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:15.286 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:15.286 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.286 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:15.286 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.286 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:15.286 [2024-07-26 14:15:23.296734] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.544 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:15.544 Malloc1 00:21:15.545 [2024-07-26 14:15:23.371105] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.545 Malloc2 00:21:15.545 Malloc3 00:21:15.545 Malloc4 00:21:15.545 Malloc5 00:21:15.802 Malloc6 00:21:15.802 Malloc7 00:21:15.802 Malloc8 00:21:15.802 Malloc9 00:21:15.802 Malloc10 00:21:15.802 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.802 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:15.802 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:15.802 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:16.060 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=266311 00:21:16.060 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 266311 /var/tmp/bdevperf.sock 00:21:16.060 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 266311 ']' 00:21:16.060 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 
0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:16.060 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:16.060 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.060 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:16.060 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:16.060 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.060 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:16.060 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.061 { 00:21:16.061 "params": { 00:21:16.061 "name": "Nvme$subsystem", 00:21:16.061 "trtype": "$TEST_TRANSPORT", 00:21:16.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.061 "adrfam": "ipv4", 00:21:16.061 "trsvcid": "$NVMF_PORT", 00:21:16.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.061 "hdgst": ${hdgst:-false}, 00:21:16.061 "ddgst": ${ddgst:-false} 00:21:16.061 }, 00:21:16.061 "method": "bdev_nvme_attach_controller" 00:21:16.061 } 00:21:16.061 EOF 00:21:16.061 )") 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.061 { 00:21:16.061 "params": { 00:21:16.061 "name": "Nvme$subsystem", 00:21:16.061 "trtype": "$TEST_TRANSPORT", 00:21:16.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.061 "adrfam": "ipv4", 00:21:16.061 "trsvcid": "$NVMF_PORT", 00:21:16.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.061 "hdgst": ${hdgst:-false}, 00:21:16.061 "ddgst": ${ddgst:-false} 00:21:16.061 }, 00:21:16.061 "method": "bdev_nvme_attach_controller" 00:21:16.061 } 00:21:16.061 EOF 00:21:16.061 )") 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.061 { 00:21:16.061 "params": { 00:21:16.061 "name": "Nvme$subsystem", 
00:21:16.061 "trtype": "$TEST_TRANSPORT", 00:21:16.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.061 "adrfam": "ipv4", 00:21:16.061 "trsvcid": "$NVMF_PORT", 00:21:16.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.061 "hdgst": ${hdgst:-false}, 00:21:16.061 "ddgst": ${ddgst:-false} 00:21:16.061 }, 00:21:16.061 "method": "bdev_nvme_attach_controller" 00:21:16.061 } 00:21:16.061 EOF 00:21:16.061 )") 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.061 { 00:21:16.061 "params": { 00:21:16.061 "name": "Nvme$subsystem", 00:21:16.061 "trtype": "$TEST_TRANSPORT", 00:21:16.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.061 "adrfam": "ipv4", 00:21:16.061 "trsvcid": "$NVMF_PORT", 00:21:16.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.061 "hdgst": ${hdgst:-false}, 00:21:16.061 "ddgst": ${ddgst:-false} 00:21:16.061 }, 00:21:16.061 "method": "bdev_nvme_attach_controller" 00:21:16.061 } 00:21:16.061 EOF 00:21:16.061 )") 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.061 { 00:21:16.061 "params": { 00:21:16.061 "name": "Nvme$subsystem", 00:21:16.061 "trtype": "$TEST_TRANSPORT", 00:21:16.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.061 "adrfam": "ipv4", 00:21:16.061 "trsvcid": "$NVMF_PORT", 00:21:16.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.061 "hdgst": ${hdgst:-false}, 00:21:16.061 "ddgst": ${ddgst:-false} 00:21:16.061 }, 00:21:16.061 "method": "bdev_nvme_attach_controller" 00:21:16.061 } 00:21:16.061 EOF 00:21:16.061 )") 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.061 { 00:21:16.061 "params": { 00:21:16.061 "name": "Nvme$subsystem", 00:21:16.061 "trtype": "$TEST_TRANSPORT", 00:21:16.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.061 "adrfam": "ipv4", 00:21:16.061 "trsvcid": "$NVMF_PORT", 00:21:16.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.061 "hdgst": ${hdgst:-false}, 00:21:16.061 "ddgst": ${ddgst:-false} 00:21:16.061 }, 00:21:16.061 "method": "bdev_nvme_attach_controller" 00:21:16.061 } 00:21:16.061 EOF 00:21:16.061 )") 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.061 14:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.061 { 00:21:16.061 "params": { 00:21:16.061 "name": "Nvme$subsystem", 00:21:16.061 "trtype": "$TEST_TRANSPORT", 00:21:16.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.061 "adrfam": "ipv4", 00:21:16.061 "trsvcid": "$NVMF_PORT", 00:21:16.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.061 "hdgst": ${hdgst:-false}, 00:21:16.061 "ddgst": ${ddgst:-false} 00:21:16.061 }, 00:21:16.061 "method": "bdev_nvme_attach_controller" 00:21:16.061 } 00:21:16.061 EOF 00:21:16.061 )") 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.061 { 00:21:16.061 "params": { 00:21:16.061 "name": "Nvme$subsystem", 00:21:16.061 "trtype": "$TEST_TRANSPORT", 00:21:16.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.061 "adrfam": "ipv4", 00:21:16.061 "trsvcid": "$NVMF_PORT", 00:21:16.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.061 "hdgst": ${hdgst:-false}, 00:21:16.061 "ddgst": ${ddgst:-false} 00:21:16.061 }, 00:21:16.061 "method": "bdev_nvme_attach_controller" 00:21:16.061 } 00:21:16.061 EOF 00:21:16.061 )") 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.061 { 00:21:16.061 "params": { 00:21:16.061 "name": "Nvme$subsystem", 00:21:16.061 "trtype": "$TEST_TRANSPORT", 00:21:16.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.061 "adrfam": "ipv4", 00:21:16.061 "trsvcid": "$NVMF_PORT", 00:21:16.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.061 "hdgst": ${hdgst:-false}, 00:21:16.061 "ddgst": ${ddgst:-false} 00:21:16.061 }, 00:21:16.061 "method": "bdev_nvme_attach_controller" 00:21:16.061 } 00:21:16.061 EOF 00:21:16.061 )") 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.061 { 00:21:16.061 "params": { 00:21:16.061 "name": "Nvme$subsystem", 00:21:16.061 "trtype": "$TEST_TRANSPORT", 00:21:16.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.061 "adrfam": "ipv4", 00:21:16.061 "trsvcid": "$NVMF_PORT", 00:21:16.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.061 "hdgst": ${hdgst:-false}, 00:21:16.061 "ddgst": ${ddgst:-false} 00:21:16.061 }, 00:21:16.061 "method": "bdev_nvme_attach_controller" 00:21:16.061 } 00:21:16.061 EOF 00:21:16.061 )") 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:16.061 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:16.062 "params": { 00:21:16.062 "name": "Nvme1", 00:21:16.062 "trtype": "tcp", 00:21:16.062 "traddr": "10.0.0.2", 00:21:16.062 "adrfam": "ipv4", 00:21:16.062 "trsvcid": "4420", 00:21:16.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.062 "hdgst": false, 00:21:16.062 "ddgst": false 00:21:16.062 }, 00:21:16.062 "method": "bdev_nvme_attach_controller" 00:21:16.062 },{ 00:21:16.062 "params": { 00:21:16.062 "name": "Nvme2", 00:21:16.062 "trtype": "tcp", 00:21:16.062 "traddr": "10.0.0.2", 00:21:16.062 "adrfam": "ipv4", 00:21:16.062 "trsvcid": "4420", 00:21:16.062 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:16.062 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:16.062 "hdgst": false, 00:21:16.062 "ddgst": false 00:21:16.062 }, 00:21:16.062 "method": "bdev_nvme_attach_controller" 00:21:16.062 },{ 00:21:16.062 "params": { 00:21:16.062 "name": "Nvme3", 00:21:16.062 "trtype": "tcp", 00:21:16.062 "traddr": "10.0.0.2", 00:21:16.062 "adrfam": "ipv4", 00:21:16.062 "trsvcid": "4420", 00:21:16.062 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:16.062 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:16.062 "hdgst": false, 00:21:16.062 "ddgst": false 00:21:16.062 }, 00:21:16.062 "method": "bdev_nvme_attach_controller" 00:21:16.062 },{ 00:21:16.062 "params": { 00:21:16.062 "name": "Nvme4", 00:21:16.062 "trtype": "tcp", 00:21:16.062 "traddr": "10.0.0.2", 00:21:16.062 "adrfam": "ipv4", 00:21:16.062 "trsvcid": "4420", 00:21:16.062 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:16.062 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:16.062 "hdgst": false, 00:21:16.062 "ddgst": false 00:21:16.062 }, 00:21:16.062 "method": "bdev_nvme_attach_controller" 00:21:16.062 },{ 00:21:16.062 "params": { 00:21:16.062 "name": "Nvme5", 00:21:16.062 "trtype": "tcp", 00:21:16.062 "traddr": "10.0.0.2", 00:21:16.062 "adrfam": "ipv4", 00:21:16.062 "trsvcid": "4420", 00:21:16.062 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:16.062 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:16.062 "hdgst": false, 00:21:16.062 "ddgst": false 00:21:16.062 }, 00:21:16.062 "method": "bdev_nvme_attach_controller" 00:21:16.062 },{ 00:21:16.062 "params": { 00:21:16.062 "name": "Nvme6", 00:21:16.062 "trtype": "tcp", 00:21:16.062 "traddr": "10.0.0.2", 00:21:16.062 "adrfam": "ipv4", 00:21:16.062 "trsvcid": "4420", 00:21:16.062 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:16.062 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:16.062 "hdgst": false, 00:21:16.062 "ddgst": false 00:21:16.062 }, 00:21:16.062 "method": "bdev_nvme_attach_controller" 00:21:16.062 },{ 00:21:16.062 "params": { 00:21:16.062 "name": "Nvme7", 00:21:16.062 "trtype": "tcp", 00:21:16.062 "traddr": "10.0.0.2", 00:21:16.062 "adrfam": "ipv4", 00:21:16.062 "trsvcid": "4420", 00:21:16.062 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:16.062 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:16.062 "hdgst": false, 00:21:16.062 "ddgst": false 00:21:16.062 }, 00:21:16.062 "method": "bdev_nvme_attach_controller" 00:21:16.062 },{ 00:21:16.062 "params": { 00:21:16.062 "name": "Nvme8", 00:21:16.062 "trtype": "tcp", 00:21:16.062 "traddr": "10.0.0.2", 00:21:16.062 "adrfam": "ipv4", 
00:21:16.062 "trsvcid": "4420", 00:21:16.062 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:16.062 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:16.062 "hdgst": false, 00:21:16.062 "ddgst": false 00:21:16.062 }, 00:21:16.062 "method": "bdev_nvme_attach_controller" 00:21:16.062 },{ 00:21:16.062 "params": { 00:21:16.062 "name": "Nvme9", 00:21:16.062 "trtype": "tcp", 00:21:16.062 "traddr": "10.0.0.2", 00:21:16.062 "adrfam": "ipv4", 00:21:16.062 "trsvcid": "4420", 00:21:16.062 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:16.062 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:16.062 "hdgst": false, 00:21:16.062 "ddgst": false 00:21:16.062 }, 00:21:16.062 "method": "bdev_nvme_attach_controller" 00:21:16.062 },{ 00:21:16.062 "params": { 00:21:16.062 "name": "Nvme10", 00:21:16.062 "trtype": "tcp", 00:21:16.062 "traddr": "10.0.0.2", 00:21:16.062 "adrfam": "ipv4", 00:21:16.062 "trsvcid": "4420", 00:21:16.062 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:16.062 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:16.062 "hdgst": false, 00:21:16.062 "ddgst": false 00:21:16.062 }, 00:21:16.062 "method": "bdev_nvme_attach_controller" 00:21:16.062 }' 00:21:16.062 [2024-07-26 14:15:23.876984] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:21:16.062 [2024-07-26 14:15:23.877070] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:16.062 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.062 [2024-07-26 14:15:23.940139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.062 [2024-07-26 14:15:24.050188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.979 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:17.979 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:17.979 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:17.979 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.979 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.979 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.979 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 266311 00:21:17.979 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:17.979 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:18.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 266311 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 266133 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:18.545 { 00:21:18.545 "params": { 00:21:18.545 "name": "Nvme$subsystem", 00:21:18.545 "trtype": "$TEST_TRANSPORT", 00:21:18.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.545 "adrfam": "ipv4", 00:21:18.545 "trsvcid": "$NVMF_PORT", 00:21:18.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.545 "hdgst": ${hdgst:-false}, 00:21:18.545 "ddgst": ${ddgst:-false} 00:21:18.545 }, 00:21:18.545 "method": "bdev_nvme_attach_controller" 00:21:18.545 } 00:21:18.545 EOF 00:21:18.545 )") 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:18.545 { 00:21:18.545 "params": { 00:21:18.545 "name": "Nvme$subsystem", 00:21:18.545 "trtype": "$TEST_TRANSPORT", 00:21:18.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.545 "adrfam": "ipv4", 00:21:18.545 "trsvcid": "$NVMF_PORT", 00:21:18.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.545 "hdgst": ${hdgst:-false}, 00:21:18.545 "ddgst": ${ddgst:-false} 00:21:18.545 }, 00:21:18.545 "method": "bdev_nvme_attach_controller" 00:21:18.545 } 00:21:18.545 EOF 00:21:18.545 )") 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:18.545 { 00:21:18.545 "params": { 00:21:18.545 "name": "Nvme$subsystem", 00:21:18.545 "trtype": "$TEST_TRANSPORT", 00:21:18.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.545 "adrfam": "ipv4", 00:21:18.545 "trsvcid": "$NVMF_PORT", 00:21:18.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.545 "hdgst": ${hdgst:-false}, 00:21:18.545 "ddgst": ${ddgst:-false} 00:21:18.545 }, 00:21:18.545 "method": "bdev_nvme_attach_controller" 00:21:18.545 } 00:21:18.545 EOF 00:21:18.545 )") 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:18.545 { 00:21:18.545 "params": { 00:21:18.545 "name": "Nvme$subsystem", 00:21:18.545 "trtype": "$TEST_TRANSPORT", 00:21:18.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.545 "adrfam": "ipv4", 00:21:18.545 "trsvcid": "$NVMF_PORT", 00:21:18.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.545 "hdgst": ${hdgst:-false}, 00:21:18.545 "ddgst": ${ddgst:-false} 00:21:18.545 }, 00:21:18.545 "method": "bdev_nvme_attach_controller" 00:21:18.545 } 00:21:18.545 EOF 00:21:18.545 )") 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:18.545 { 00:21:18.545 "params": { 00:21:18.545 "name": "Nvme$subsystem", 00:21:18.545 "trtype": "$TEST_TRANSPORT", 00:21:18.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.545 "adrfam": "ipv4", 00:21:18.545 "trsvcid": "$NVMF_PORT", 00:21:18.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.545 "hdgst": ${hdgst:-false}, 00:21:18.545 "ddgst": ${ddgst:-false} 00:21:18.545 }, 00:21:18.545 "method": "bdev_nvme_attach_controller" 00:21:18.545 } 00:21:18.545 EOF 00:21:18.545 )") 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:18.545 { 00:21:18.545 "params": { 00:21:18.545 "name": "Nvme$subsystem", 00:21:18.545 "trtype": "$TEST_TRANSPORT", 00:21:18.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.545 "adrfam": "ipv4", 00:21:18.545 "trsvcid": "$NVMF_PORT", 00:21:18.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.545 "hdgst": ${hdgst:-false}, 00:21:18.545 "ddgst": ${ddgst:-false} 00:21:18.545 }, 00:21:18.545 "method": "bdev_nvme_attach_controller" 00:21:18.545 } 00:21:18.545 EOF 00:21:18.545 )") 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:18.545 { 00:21:18.545 "params": { 00:21:18.545 "name": "Nvme$subsystem", 00:21:18.545 "trtype": "$TEST_TRANSPORT", 00:21:18.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.545 "adrfam": "ipv4", 00:21:18.545 "trsvcid": "$NVMF_PORT", 00:21:18.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.545 "hdgst": ${hdgst:-false}, 00:21:18.545 "ddgst": ${ddgst:-false} 00:21:18.545 }, 00:21:18.545 "method": "bdev_nvme_attach_controller" 00:21:18.545 } 00:21:18.545 EOF 00:21:18.545 )") 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:18.545 14:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:18.545 { 00:21:18.545 "params": { 00:21:18.545 "name": "Nvme$subsystem", 00:21:18.545 "trtype": "$TEST_TRANSPORT", 00:21:18.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.545 "adrfam": "ipv4", 00:21:18.545 "trsvcid": "$NVMF_PORT", 00:21:18.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.545 "hdgst": ${hdgst:-false}, 00:21:18.545 "ddgst": ${ddgst:-false} 00:21:18.545 }, 00:21:18.545 "method": "bdev_nvme_attach_controller" 00:21:18.545 } 00:21:18.545 EOF 00:21:18.545 )") 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:18.545 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:18.545 { 00:21:18.545 "params": { 00:21:18.545 "name": "Nvme$subsystem", 00:21:18.545 "trtype": "$TEST_TRANSPORT", 00:21:18.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.546 "adrfam": "ipv4", 00:21:18.546 "trsvcid": "$NVMF_PORT", 00:21:18.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.546 "hdgst": ${hdgst:-false}, 00:21:18.546 "ddgst": ${ddgst:-false} 00:21:18.546 }, 00:21:18.546 "method": "bdev_nvme_attach_controller" 00:21:18.546 } 00:21:18.546 EOF 00:21:18.546 )") 00:21:18.546 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:18.546 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:18.546 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:18.546 { 00:21:18.546 "params": { 00:21:18.546 "name": "Nvme$subsystem", 00:21:18.546 "trtype": "$TEST_TRANSPORT", 00:21:18.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.546 "adrfam": "ipv4", 00:21:18.546 "trsvcid": "$NVMF_PORT", 00:21:18.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.546 "hdgst": ${hdgst:-false}, 00:21:18.546 "ddgst": ${ddgst:-false} 00:21:18.546 }, 00:21:18.546 "method": "bdev_nvme_attach_controller" 00:21:18.546 } 00:21:18.546 EOF 00:21:18.546 )") 00:21:18.546 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:18.546 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
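The gen_nvmf_target_json trace above follows a simple pattern: one bdev_nvme_attach_controller fragment is emitted per subsystem id (here 1 through 10), collected into the config array, comma-joined with IFS=",", and validated with jq. A condensed sketch of that pattern; the outer "subsystems"/"bdev" wrapper is a plausible reconstruction, since the trace only shows the fragments, the IFS join, and the jq call:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller entry per subsystem id.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments inside a bdev config and let jq validate it.
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
  "config": [ $(IFS=","; printf '%s\n' "${config[*]}") ] } ] }
JSON
}

The printf output that follows is exactly this join for Nvme1 through Nvme10; bdevperf consumes the finished document over /dev/fd/62, as in the command that opens this section.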
00:21:18.546 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:18.546 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:18.546 "params": { 00:21:18.546 "name": "Nvme1", 00:21:18.546 "trtype": "tcp", 00:21:18.546 "traddr": "10.0.0.2", 00:21:18.546 "adrfam": "ipv4", 00:21:18.546 "trsvcid": "4420", 00:21:18.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.546 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.546 "hdgst": false, 00:21:18.546 "ddgst": false 00:21:18.546 }, 00:21:18.546 "method": "bdev_nvme_attach_controller" 00:21:18.546 },{ 00:21:18.546 "params": { 00:21:18.546 "name": "Nvme2", 00:21:18.546 "trtype": "tcp", 00:21:18.546 "traddr": "10.0.0.2", 00:21:18.546 "adrfam": "ipv4", 00:21:18.546 "trsvcid": "4420", 00:21:18.546 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:18.546 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:18.546 "hdgst": false, 00:21:18.546 "ddgst": false 00:21:18.546 }, 00:21:18.546 "method": "bdev_nvme_attach_controller" 00:21:18.546 },{ 00:21:18.546 "params": { 00:21:18.546 "name": "Nvme3", 00:21:18.546 "trtype": "tcp", 00:21:18.546 "traddr": "10.0.0.2", 00:21:18.546 "adrfam": "ipv4", 00:21:18.546 "trsvcid": "4420", 00:21:18.546 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:18.546 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:18.546 "hdgst": false, 00:21:18.546 "ddgst": false 00:21:18.546 }, 00:21:18.546 "method": "bdev_nvme_attach_controller" 00:21:18.546 },{ 00:21:18.546 "params": { 00:21:18.546 "name": "Nvme4", 00:21:18.546 "trtype": "tcp", 00:21:18.546 "traddr": "10.0.0.2", 00:21:18.546 "adrfam": "ipv4", 00:21:18.546 "trsvcid": "4420", 00:21:18.546 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:18.546 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:18.546 "hdgst": false, 00:21:18.546 "ddgst": false 00:21:18.546 }, 00:21:18.546 "method": "bdev_nvme_attach_controller" 00:21:18.546 },{ 00:21:18.546 "params": { 00:21:18.546 "name": "Nvme5", 00:21:18.546 "trtype": "tcp", 00:21:18.546 "traddr": "10.0.0.2", 00:21:18.546 "adrfam": "ipv4", 00:21:18.546 "trsvcid": "4420", 00:21:18.546 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:18.546 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:18.546 "hdgst": false, 00:21:18.546 "ddgst": false 00:21:18.546 }, 00:21:18.546 "method": "bdev_nvme_attach_controller" 00:21:18.546 },{ 00:21:18.546 "params": { 00:21:18.546 "name": "Nvme6", 00:21:18.546 "trtype": "tcp", 00:21:18.546 "traddr": "10.0.0.2", 00:21:18.546 "adrfam": "ipv4", 00:21:18.546 "trsvcid": "4420", 00:21:18.546 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:18.546 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:18.546 "hdgst": false, 00:21:18.546 "ddgst": false 00:21:18.546 }, 00:21:18.546 "method": "bdev_nvme_attach_controller" 00:21:18.546 },{ 00:21:18.546 "params": { 00:21:18.546 "name": "Nvme7", 00:21:18.546 "trtype": "tcp", 00:21:18.546 "traddr": "10.0.0.2", 00:21:18.546 "adrfam": "ipv4", 00:21:18.546 "trsvcid": "4420", 00:21:18.546 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:18.546 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:18.546 "hdgst": false, 00:21:18.546 "ddgst": false 00:21:18.546 }, 00:21:18.546 "method": "bdev_nvme_attach_controller" 00:21:18.546 },{ 00:21:18.546 "params": { 00:21:18.546 "name": "Nvme8", 00:21:18.546 "trtype": "tcp", 00:21:18.546 "traddr": "10.0.0.2", 00:21:18.546 "adrfam": "ipv4", 00:21:18.546 "trsvcid": "4420", 00:21:18.546 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:18.546 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:18.546 "hdgst": false, 00:21:18.546 "ddgst": false 00:21:18.546 }, 00:21:18.546 "method": "bdev_nvme_attach_controller" 00:21:18.546 },{ 00:21:18.546 "params": { 00:21:18.546 "name": "Nvme9", 00:21:18.546 "trtype": "tcp", 00:21:18.546 "traddr": "10.0.0.2", 00:21:18.546 "adrfam": "ipv4", 00:21:18.546 "trsvcid": "4420", 00:21:18.546 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:18.546 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:18.546 "hdgst": false, 00:21:18.546 "ddgst": false 00:21:18.546 }, 00:21:18.546 "method": "bdev_nvme_attach_controller" 00:21:18.546 },{ 00:21:18.546 "params": { 00:21:18.546 "name": "Nvme10", 00:21:18.546 "trtype": "tcp", 00:21:18.546 "traddr": "10.0.0.2", 00:21:18.546 "adrfam": "ipv4", 00:21:18.546 "trsvcid": "4420", 00:21:18.546 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:18.546 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:18.546 "hdgst": false, 00:21:18.546 "ddgst": false 00:21:18.546 }, 00:21:18.546 "method": "bdev_nvme_attach_controller" 00:21:18.546 }' 00:21:18.546 [2024-07-26 14:15:26.551955] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:21:18.546 [2024-07-26 14:15:26.552044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid266606 ] 00:21:18.804 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.804 [2024-07-26 14:15:26.618947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.804 [2024-07-26 14:15:26.730554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.176 Running I/O for 1 seconds... 00:21:21.549 00:21:21.549 Latency(us) 00:21:21.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.549 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.549 Verification LBA range: start 0x0 length 0x400 00:21:21.549 Nvme1n1 : 1.14 224.95 14.06 0.00 0.00 281702.02 22622.06 288940.94 00:21:21.549 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.549 Verification LBA range: start 0x0 length 0x400 00:21:21.550 Nvme2n1 : 1.15 222.70 13.92 0.00 0.00 280045.04 21748.24 262532.36 00:21:21.550 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.550 Verification LBA range: start 0x0 length 0x400 00:21:21.550 Nvme3n1 : 1.08 237.05 14.82 0.00 0.00 257849.84 17961.72 257872.02 00:21:21.550 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.550 Verification LBA range: start 0x0 length 0x400 00:21:21.550 Nvme4n1 : 1.07 239.16 14.95 0.00 0.00 251098.64 20680.25 257872.02 00:21:21.550 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.550 Verification LBA range: start 0x0 length 0x400 00:21:21.550 Nvme5n1 : 1.16 221.58 13.85 0.00 0.00 267722.90 21845.33 254765.13 00:21:21.550 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.550 Verification LBA range: start 0x0 length 0x400 00:21:21.550 Nvme6n1 : 1.16 220.66 13.79 0.00 0.00 264418.61 22233.69 260978.92 00:21:21.550 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.550 Verification LBA range: start 0x0 length 0x400 00:21:21.550 Nvme7n1 : 1.18 272.34 17.02 0.00 0.00 209875.85 17767.54 237677.23 00:21:21.550 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.550 
Verification LBA range: start 0x0 length 0x400 00:21:21.550 Nvme8n1 : 1.18 276.22 17.26 0.00 0.00 203792.80 4538.97 251658.24 00:21:21.550 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.550 Verification LBA range: start 0x0 length 0x400 00:21:21.550 Nvme9n1 : 1.16 219.79 13.74 0.00 0.00 252117.33 22622.06 259425.47 00:21:21.550 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.550 Verification LBA range: start 0x0 length 0x400 00:21:21.550 Nvme10n1 : 1.17 219.10 13.69 0.00 0.00 248677.83 22330.79 278066.82 00:21:21.550 =================================================================================================================== 00:21:21.550 Total : 2353.56 147.10 0.00 0.00 249507.16 4538.97 288940.94 00:21:21.550 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:21.550 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:21.550 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:21.550 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:21.550 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:21.550 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:21.550 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:21.550 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:21.550 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:21.550 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:21.550 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:21.550 rmmod nvme_tcp 00:21:21.550 rmmod nvme_fabrics 00:21:21.550 rmmod nvme_keyring 00:21:21.807 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:21.807 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:21.807 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:21.807 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 266133 ']' 00:21:21.807 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 266133 00:21:21.807 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 266133 ']' 00:21:21.808 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 266133 00:21:21.808 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:21:21.808 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
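The killprocess sequence running through here guards the kill with a fixed set of checks: the pid argument must be non-empty, kill -0 must confirm the process is alive, and on Linux the process name is read back so a sudo wrapper is never signalled directly (here it resolves to reactor_1). A condensed sketch; the exact failure handling is an assumption:

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1            # a pid must be supplied
    kill -0 "$pid" || return 1           # the process must still exist
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" = sudo ] && return 1   # never signal the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                  # reap it; a nonzero exit is expected
}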
00:21:21.808 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 266133 00:21:21.808 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:21.808 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:21.808 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 266133' 00:21:21.808 killing process with pid 266133 00:21:21.808 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 266133 00:21:21.808 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 266133 00:21:22.374 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:22.374 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:22.374 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:22.374 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:22.374 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:22.374 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.374 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.374 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:24.276 00:21:24.276 real 0m11.547s 00:21:24.276 user 0m31.876s 00:21:24.276 sys 0m3.292s 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:24.276 ************************************ 00:21:24.276 END TEST nvmf_shutdown_tc1 00:21:24.276 ************************************ 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:24.276 ************************************ 00:21:24.276 START TEST nvmf_shutdown_tc2 00:21:24.276 ************************************ 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:24.276 14:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:24.276 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.276 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:24.277 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:24.277 Found net devices under 0000:09:00.0: cvl_0_0 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.277 14:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:24.277 Found net devices under 0000:09:00.1: cvl_0_1 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:24.277 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.535 14:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:24.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:21:24.535 00:21:24.535 --- 10.0.0.2 ping statistics --- 00:21:24.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.535 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:21:24.535 00:21:24.535 --- 10.0.0.1 ping statistics --- 00:21:24.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.535 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=267468 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 267468 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 267468 ']' 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:24.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:24.535 [2024-07-26 14:15:32.410977] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:21:24.535 [2024-07-26 14:15:32.411073] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.535 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.535 [2024-07-26 14:15:32.473965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.793 [2024-07-26 14:15:32.575894] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.793 [2024-07-26 14:15:32.575942] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.794 [2024-07-26 14:15:32.575964] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.794 [2024-07-26 14:15:32.575975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.794 [2024-07-26 14:15:32.575984] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
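At this point nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace created earlier, with instance id 0, all tracepoint groups enabled (-e 0xFFFF), and core mask 0x1E, and waitforlisten blocks until the RPC socket answers. A minimal manual equivalent; the polling loop here is a simplification, not the real helper's socket checks:

sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Poll the default RPC socket until the target finishes framework init.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock framework_wait_init > /dev/null 2>&1; do
    sleep 0.5
done

Core mask 0x1E also explains the four reactors reported next: bits 1-4 are set, so cores 1, 2, 3 and 4 each run a reactor while core 0 is left free for the bdevperf initiator (which runs with -c 0x1).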
00:21:24.794 [2024-07-26 14:15:32.576062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.794 [2024-07-26 14:15:32.576169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.794 [2024-07-26 14:15:32.576243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:24.794 [2024-07-26 14:15:32.576245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:24.794 [2024-07-26 14:15:32.735081] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.794 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:24.794 Malloc1 00:21:25.051 [2024-07-26 14:15:32.824645] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.051 Malloc2 00:21:25.051 Malloc3 00:21:25.051 Malloc4 00:21:25.051 Malloc5 00:21:25.051 Malloc6 00:21:25.309 Malloc7 00:21:25.309 Malloc8 00:21:25.309 Malloc9 00:21:25.309 Malloc10 00:21:25.309 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.309 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:25.309 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:25.309 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:25.309 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=267552 00:21:25.309 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 267552 /var/tmp/bdevperf.sock 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 267552 ']' 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.310 14:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.310 { 00:21:25.310 "params": { 00:21:25.310 "name": "Nvme$subsystem", 00:21:25.310 "trtype": "$TEST_TRANSPORT", 00:21:25.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.310 "adrfam": "ipv4", 00:21:25.310 "trsvcid": "$NVMF_PORT", 00:21:25.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.310 "hdgst": ${hdgst:-false}, 00:21:25.310 "ddgst": ${ddgst:-false} 00:21:25.310 }, 00:21:25.310 "method": "bdev_nvme_attach_controller" 00:21:25.310 } 00:21:25.310 EOF 00:21:25.310 )") 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.310 { 00:21:25.310 "params": { 00:21:25.310 "name": "Nvme$subsystem", 00:21:25.310 "trtype": "$TEST_TRANSPORT", 00:21:25.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.310 "adrfam": "ipv4", 00:21:25.310 "trsvcid": "$NVMF_PORT", 00:21:25.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.310 "hdgst": ${hdgst:-false}, 00:21:25.310 "ddgst": ${ddgst:-false} 00:21:25.310 }, 00:21:25.310 "method": "bdev_nvme_attach_controller" 00:21:25.310 } 00:21:25.310 EOF 00:21:25.310 )") 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.310 { 00:21:25.310 "params": { 00:21:25.310 
"name": "Nvme$subsystem", 00:21:25.310 "trtype": "$TEST_TRANSPORT", 00:21:25.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.310 "adrfam": "ipv4", 00:21:25.310 "trsvcid": "$NVMF_PORT", 00:21:25.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.310 "hdgst": ${hdgst:-false}, 00:21:25.310 "ddgst": ${ddgst:-false} 00:21:25.310 }, 00:21:25.310 "method": "bdev_nvme_attach_controller" 00:21:25.310 } 00:21:25.310 EOF 00:21:25.310 )") 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.310 { 00:21:25.310 "params": { 00:21:25.310 "name": "Nvme$subsystem", 00:21:25.310 "trtype": "$TEST_TRANSPORT", 00:21:25.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.310 "adrfam": "ipv4", 00:21:25.310 "trsvcid": "$NVMF_PORT", 00:21:25.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.310 "hdgst": ${hdgst:-false}, 00:21:25.310 "ddgst": ${ddgst:-false} 00:21:25.310 }, 00:21:25.310 "method": "bdev_nvme_attach_controller" 00:21:25.310 } 00:21:25.310 EOF 00:21:25.310 )") 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.310 { 00:21:25.310 "params": { 00:21:25.310 "name": "Nvme$subsystem", 00:21:25.310 "trtype": "$TEST_TRANSPORT", 00:21:25.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.310 "adrfam": "ipv4", 00:21:25.310 "trsvcid": "$NVMF_PORT", 00:21:25.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.310 "hdgst": ${hdgst:-false}, 00:21:25.310 "ddgst": ${ddgst:-false} 00:21:25.310 }, 00:21:25.310 "method": "bdev_nvme_attach_controller" 00:21:25.310 } 00:21:25.310 EOF 00:21:25.310 )") 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.310 { 00:21:25.310 "params": { 00:21:25.310 "name": "Nvme$subsystem", 00:21:25.310 "trtype": "$TEST_TRANSPORT", 00:21:25.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.310 "adrfam": "ipv4", 00:21:25.310 "trsvcid": "$NVMF_PORT", 00:21:25.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.310 "hdgst": ${hdgst:-false}, 00:21:25.310 "ddgst": ${ddgst:-false} 00:21:25.310 }, 00:21:25.310 "method": "bdev_nvme_attach_controller" 00:21:25.310 } 00:21:25.310 EOF 00:21:25.310 )") 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.310 { 00:21:25.310 "params": { 00:21:25.310 "name": "Nvme$subsystem", 00:21:25.310 "trtype": "$TEST_TRANSPORT", 00:21:25.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.310 "adrfam": "ipv4", 00:21:25.310 "trsvcid": "$NVMF_PORT", 00:21:25.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.310 "hdgst": ${hdgst:-false}, 00:21:25.310 "ddgst": ${ddgst:-false} 00:21:25.310 }, 00:21:25.310 "method": "bdev_nvme_attach_controller" 00:21:25.310 } 00:21:25.310 EOF 00:21:25.310 )") 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.310 { 00:21:25.310 "params": { 00:21:25.310 "name": "Nvme$subsystem", 00:21:25.310 "trtype": "$TEST_TRANSPORT", 00:21:25.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.310 "adrfam": "ipv4", 00:21:25.310 "trsvcid": "$NVMF_PORT", 00:21:25.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.310 "hdgst": ${hdgst:-false}, 00:21:25.310 "ddgst": ${ddgst:-false} 00:21:25.310 }, 00:21:25.310 "method": "bdev_nvme_attach_controller" 00:21:25.310 } 00:21:25.310 EOF 00:21:25.310 )") 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.310 { 00:21:25.310 "params": { 00:21:25.310 "name": "Nvme$subsystem", 00:21:25.310 "trtype": "$TEST_TRANSPORT", 00:21:25.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.310 "adrfam": "ipv4", 00:21:25.310 "trsvcid": "$NVMF_PORT", 00:21:25.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.310 "hdgst": ${hdgst:-false}, 00:21:25.310 "ddgst": ${ddgst:-false} 00:21:25.310 }, 00:21:25.310 "method": "bdev_nvme_attach_controller" 00:21:25.310 } 00:21:25.310 EOF 00:21:25.310 )") 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.310 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.310 { 00:21:25.311 "params": { 00:21:25.311 "name": "Nvme$subsystem", 00:21:25.311 "trtype": "$TEST_TRANSPORT", 00:21:25.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.311 "adrfam": "ipv4", 00:21:25.311 "trsvcid": "$NVMF_PORT", 00:21:25.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.311 "hdgst": ${hdgst:-false}, 00:21:25.311 "ddgst": ${ddgst:-false} 00:21:25.311 }, 00:21:25.311 "method": "bdev_nvme_attach_controller" 00:21:25.311 } 00:21:25.311 EOF 00:21:25.311 )") 00:21:25.311 14:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:25.311 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:21:25.311 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:25.311 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:25.311 "params": { 00:21:25.311 "name": "Nvme1", 00:21:25.311 "trtype": "tcp", 00:21:25.311 "traddr": "10.0.0.2", 00:21:25.311 "adrfam": "ipv4", 00:21:25.311 "trsvcid": "4420", 00:21:25.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:25.311 "hdgst": false, 00:21:25.311 "ddgst": false 00:21:25.311 }, 00:21:25.311 "method": "bdev_nvme_attach_controller" 00:21:25.311 },{ 00:21:25.311 "params": { 00:21:25.311 "name": "Nvme2", 00:21:25.311 "trtype": "tcp", 00:21:25.311 "traddr": "10.0.0.2", 00:21:25.311 "adrfam": "ipv4", 00:21:25.311 "trsvcid": "4420", 00:21:25.311 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:25.311 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:25.311 "hdgst": false, 00:21:25.311 "ddgst": false 00:21:25.311 }, 00:21:25.311 "method": "bdev_nvme_attach_controller" 00:21:25.311 },{ 00:21:25.311 "params": { 00:21:25.311 "name": "Nvme3", 00:21:25.311 "trtype": "tcp", 00:21:25.311 "traddr": "10.0.0.2", 00:21:25.311 "adrfam": "ipv4", 00:21:25.311 "trsvcid": "4420", 00:21:25.311 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:25.311 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:25.311 "hdgst": false, 00:21:25.311 "ddgst": false 00:21:25.311 }, 00:21:25.311 "method": "bdev_nvme_attach_controller" 00:21:25.311 },{ 00:21:25.311 "params": { 00:21:25.311 "name": "Nvme4", 00:21:25.311 "trtype": "tcp", 00:21:25.311 "traddr": "10.0.0.2", 00:21:25.311 "adrfam": "ipv4", 00:21:25.311 "trsvcid": "4420", 00:21:25.311 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:25.311 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:25.311 "hdgst": false, 00:21:25.311 "ddgst": false 00:21:25.311 }, 00:21:25.311 "method": "bdev_nvme_attach_controller" 00:21:25.311 },{ 00:21:25.311 "params": { 00:21:25.311 "name": "Nvme5", 00:21:25.311 "trtype": "tcp", 00:21:25.311 "traddr": "10.0.0.2", 00:21:25.311 "adrfam": "ipv4", 00:21:25.311 "trsvcid": "4420", 00:21:25.311 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:25.311 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:25.311 "hdgst": false, 00:21:25.311 "ddgst": false 00:21:25.311 }, 00:21:25.311 "method": "bdev_nvme_attach_controller" 00:21:25.311 },{ 00:21:25.311 "params": { 00:21:25.311 "name": "Nvme6", 00:21:25.311 "trtype": "tcp", 00:21:25.311 "traddr": "10.0.0.2", 00:21:25.311 "adrfam": "ipv4", 00:21:25.311 "trsvcid": "4420", 00:21:25.311 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:25.311 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:25.311 "hdgst": false, 00:21:25.311 "ddgst": false 00:21:25.311 }, 00:21:25.311 "method": "bdev_nvme_attach_controller" 00:21:25.311 },{ 00:21:25.311 "params": { 00:21:25.311 "name": "Nvme7", 00:21:25.311 "trtype": "tcp", 00:21:25.311 "traddr": "10.0.0.2", 00:21:25.311 "adrfam": "ipv4", 00:21:25.311 "trsvcid": "4420", 00:21:25.311 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:25.311 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:25.311 "hdgst": false, 00:21:25.311 "ddgst": false 00:21:25.311 }, 00:21:25.311 "method": "bdev_nvme_attach_controller" 00:21:25.311 },{ 00:21:25.311 "params": { 00:21:25.311 "name": "Nvme8", 00:21:25.311 "trtype": "tcp", 
00:21:25.311 "traddr": "10.0.0.2", 00:21:25.311 "adrfam": "ipv4", 00:21:25.311 "trsvcid": "4420", 00:21:25.311 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:25.311 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:25.311 "hdgst": false, 00:21:25.311 "ddgst": false 00:21:25.311 }, 00:21:25.311 "method": "bdev_nvme_attach_controller" 00:21:25.311 },{ 00:21:25.311 "params": { 00:21:25.311 "name": "Nvme9", 00:21:25.311 "trtype": "tcp", 00:21:25.311 "traddr": "10.0.0.2", 00:21:25.311 "adrfam": "ipv4", 00:21:25.311 "trsvcid": "4420", 00:21:25.311 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:25.311 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:25.311 "hdgst": false, 00:21:25.311 "ddgst": false 00:21:25.311 }, 00:21:25.311 "method": "bdev_nvme_attach_controller" 00:21:25.311 },{ 00:21:25.311 "params": { 00:21:25.311 "name": "Nvme10", 00:21:25.311 "trtype": "tcp", 00:21:25.311 "traddr": "10.0.0.2", 00:21:25.311 "adrfam": "ipv4", 00:21:25.311 "trsvcid": "4420", 00:21:25.311 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:25.311 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:25.311 "hdgst": false, 00:21:25.311 "ddgst": false 00:21:25.311 }, 00:21:25.311 "method": "bdev_nvme_attach_controller" 00:21:25.311 }' 00:21:25.311 [2024-07-26 14:15:33.313893] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:21:25.311 [2024-07-26 14:15:33.313986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid267552 ] 00:21:25.569 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.569 [2024-07-26 14:15:33.377442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.569 [2024-07-26 14:15:33.489201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.464 Running I/O for 10 seconds... 
00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:27.464 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.721 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.721 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:27.721 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:27.721 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:27.979 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:27.979 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:27.979 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:27.979 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:27.979 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.979 14:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:27.979 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:27.979 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67
00:21:27.979 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']'
00:21:27.979 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- ))
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0
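Annotation: that countdown (read_io_count climbing 3, 67, 131 against the '-ge 100' check) is the waitforio poller from target/shutdown.sh, which gates the shutdown on the bdev actually serving reads. A sketch reconstructed from the xtrace above; rpc_cmd is the suite's RPC wrapper and is assumed to be sourced already:

waitforio() {
    # shutdown.sh@50/@54: both the RPC socket and the bdev name are required.
    local sock=$1 bdev=$2
    [ -z "$sock" ] && return 1
    [ -z "$bdev" ] && return 1
    local ret=1 i read_io_count
    # Poll up to 10 times, 0.25 s apart; pass once >= 100 reads completed.
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

Here it returns 0 on the third poll, so killprocess can take bdevperf (pid 267552) down. In the shutdown-latency table that follows, the MiB/s column is simply IOPS times the 65536-byte IO size, i.e. IOPS/16: 210.10/16 = 13.13 for Nvme1n1, and 2285.14/16 ≈ 142.82 for the total row.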
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 267552
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 267552 ']'
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 267552
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 267552
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 267552'
killing process with pid 267552
14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 267552
00:21:28.237 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 267552
00:21:28.237 Received shutdown signal, test time was about 0.944569 seconds
00:21:28.237
00:21:28.237 Latency(us)
00:21:28.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:28.237 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:28.237 Verification LBA range: start 0x0 length 0x400
00:21:28.237 Nvme1n1 : 0.91 210.10 13.13 0.00 0.00 301093.42 22039.51 260978.92
00:21:28.237 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:28.237 Verification LBA range: start 0x0 length 0x400
00:21:28.237 Nvme2n1 : 0.91 210.85 13.18 0.00 0.00 293804.06 18155.90 259425.47
00:21:28.237 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:28.237 Verification LBA range: start 0x0 length 0x400
00:21:28.237 Nvme3n1 : 0.94 272.09 17.01 0.00 0.00 223201.47 15728.64 248551.35
00:21:28.237 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:28.237 Verification LBA range: start 0x0 length 0x400
00:21:28.237 Nvme4n1 : 0.94 271.26 16.95 0.00 0.00 219274.81 17961.72 260978.92
00:21:28.237 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:28.237 Verification LBA range: start 0x0 length 0x400
00:21:28.237 Nvme5n1 : 0.93 206.92 12.93 0.00 0.00 281211.39 22233.69 262532.36
00:21:28.237 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:28.237 Verification LBA range: start 0x0 length 0x400
00:21:28.237 Nvme6n1 : 0.93 274.51 17.16 0.00 0.00 207363.22 17379.18 254765.13
00:21:28.237 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:28.237 Verification LBA range: start 0x0 length 0x400
00:21:28.237 Nvme7n1 : 0.90 213.65 13.35 0.00 0.00 259148.86 35729.26 256318.58
00:21:28.237 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:28.237 Verification LBA range: start 0x0 length 0x400
00:21:28.237 Nvme8n1 : 0.90 212.91 13.31 0.00 0.00 254049.60 18252.99 254765.13
00:21:28.237 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:28.237 Verification LBA range: start 0x0 length 0x400
00:21:28.237 Nvme9n1 : 0.94 204.90 12.81 0.00 0.00 259765.79 21456.97 296708.17
00:21:28.237 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:28.237 Verification LBA range: start 0x0 length 0x400
00:21:28.237 Nvme10n1 : 0.92 207.95 13.00 0.00 0.00 249717.44 18155.90 260978.92
00:21:28.237 ===================================================================================================================
00:21:28.237 Total : 2285.14 142.82 0.00 0.00 251385.75 15728.64 296708.17
00:21:28.494 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 267468
14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:29.864 rmmod nvme_tcp 00:21:29.864 rmmod nvme_fabrics 00:21:29.864 rmmod nvme_keyring 00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 267468 ']' 00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 267468 00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 267468 ']' 00:21:29.864 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 267468 00:21:29.865 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:29.865 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:29.865 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 267468 00:21:29.865 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:29.865 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:29.865 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 267468' 00:21:29.865 killing process with pid 267468 00:21:29.865 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 267468 00:21:29.865 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 267468 00:21:30.123 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:30.123 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:30.123 
14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:30.123 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:30.123 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:30.123 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.123 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.123 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:32.659 00:21:32.659 real 0m7.977s 00:21:32.659 user 0m24.652s 00:21:32.659 sys 0m1.494s 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:32.659 ************************************ 00:21:32.659 END TEST nvmf_shutdown_tc2 00:21:32.659 ************************************ 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:32.659 ************************************ 00:21:32.659 START TEST nvmf_shutdown_tc3 00:21:32.659 ************************************ 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.659 
14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:32.659 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:32.659 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:32.659 14:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:32.659 Found net devices under 0000:09:00.0: cvl_0_0 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.659 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:32.660 Found net devices under 0000:09:00.1: cvl_0_1 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.660 14:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:32.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:21:32.660 00:21:32.660 --- 10.0.0.2 ping statistics --- 00:21:32.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.660 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:32.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:21:32.660 00:21:32.660 --- 10.0.0.1 ping statistics --- 00:21:32.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.660 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=268585 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 268585 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 268585 ']' 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
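Annotation: the nvmf_tcp_init sequence traced above splits the two detected e810 ports across network namespaces, so the target listens at 10.0.0.2 inside cvl_0_0_ns_spdk while the initiator keeps 10.0.0.1 in the root namespace, and the two pings prove reachability in both directions. Condensed into a runnable sketch, with the commands exactly as traced (run as root):

# Condensed from the nvmf/common.sh@229-268 trace above.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"    # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root ns
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # root ns -> namespace
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1

Keeping one port in its own namespace forces the NVMe/TCP traffic onto the wire between the two physical ports rather than letting the kernel short-circuit it through loopback, which is the point of the phy variant of this job. Note also the -m 0x1E core mask on the nvmf_tgt command above: 0x1E is binary 11110, i.e. cores 1 through 4, matching the four "Reactor started on core N" notices that follow.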
00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:32.660 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:32.660 [2024-07-26 14:15:40.440950] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:21:32.660 [2024-07-26 14:15:40.441016] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.660 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.660 [2024-07-26 14:15:40.501321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:32.660 [2024-07-26 14:15:40.602466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.660 [2024-07-26 14:15:40.602539] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.660 [2024-07-26 14:15:40.602563] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.660 [2024-07-26 14:15:40.602573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.660 [2024-07-26 14:15:40.602583] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.660 [2024-07-26 14:15:40.602668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.660 [2024-07-26 14:15:40.602730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:32.660 [2024-07-26 14:15:40.602798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.660 [2024-07-26 14:15:40.602795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:32.918 [2024-07-26 14:15:40.764064] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.918 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:21:32.918 Malloc1 00:21:32.918 [2024-07-26 14:15:40.853344] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.918 Malloc2 00:21:32.918 Malloc3 00:21:33.176 Malloc4 00:21:33.176 Malloc5 00:21:33.176 Malloc6 00:21:33.176 Malloc7 00:21:33.176 Malloc8 00:21:33.435 Malloc9 00:21:33.435 Malloc10 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=268659 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 268659 /var/tmp/bdevperf.sock 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 268659 ']' 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:33.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
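Annotation: between the transport creation and the bdevperf launch above, create_subsystems (target/shutdown.sh@22-35) batched one group of RPCs per subsystem into rpcs.txt and replayed the file through a single rpc_cmd session; the Malloc1 through Malloc10 replies are the bdev creation responses. xtrace does not echo heredoc bodies, so the exact RPC lines are not visible in this log; the sketch below is a plausible reconstruction, and the bdev size, block size, and serial format are assumptions:

# Plausible shape of create_subsystems: only the loop, rpcs.txt, and the
# Malloc replies are visible in the log. $testdir points at
# test/nvmf/target, per the rm -rf paths traced earlier.
num_subsystems=({1..10})
rm -f "$testdir/rpcs.txt"
for i in "${num_subsystems[@]}"; do
    cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < "$testdir/rpcs.txt"   # replay the whole batch in one RPC session

The "Listening on 10.0.0.2 port 4420" notice and the Malloc1..Malloc10 lines above are consistent with this shape.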
00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.435 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:33.435 { 00:21:33.435 "params": { 00:21:33.435 "name": "Nvme$subsystem", 00:21:33.436 "trtype": "$TEST_TRANSPORT", 00:21:33.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.436 "adrfam": "ipv4", 00:21:33.436 "trsvcid": "$NVMF_PORT", 00:21:33.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.436 "hdgst": ${hdgst:-false}, 00:21:33.436 "ddgst": ${ddgst:-false} 00:21:33.436 }, 00:21:33.436 "method": "bdev_nvme_attach_controller" 00:21:33.436 } 00:21:33.436 EOF 00:21:33.436 )") 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:33.436 { 00:21:33.436 "params": { 00:21:33.436 "name": "Nvme$subsystem", 00:21:33.436 "trtype": "$TEST_TRANSPORT", 00:21:33.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.436 "adrfam": "ipv4", 00:21:33.436 "trsvcid": "$NVMF_PORT", 00:21:33.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.436 "hdgst": ${hdgst:-false}, 00:21:33.436 "ddgst": ${ddgst:-false} 00:21:33.436 }, 00:21:33.436 "method": "bdev_nvme_attach_controller" 00:21:33.436 } 00:21:33.436 EOF 00:21:33.436 )") 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:33.436 { 00:21:33.436 "params": { 00:21:33.436 "name": "Nvme$subsystem", 00:21:33.436 "trtype": "$TEST_TRANSPORT", 00:21:33.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.436 "adrfam": "ipv4", 00:21:33.436 "trsvcid": "$NVMF_PORT", 00:21:33.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.436 "hdgst": ${hdgst:-false}, 00:21:33.436 "ddgst": ${ddgst:-false} 00:21:33.436 }, 00:21:33.436 "method": "bdev_nvme_attach_controller" 00:21:33.436 } 00:21:33.436 EOF 00:21:33.436 )") 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:33.436 { 00:21:33.436 "params": { 00:21:33.436 "name": "Nvme$subsystem", 00:21:33.436 
"trtype": "$TEST_TRANSPORT", 00:21:33.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.436 "adrfam": "ipv4", 00:21:33.436 "trsvcid": "$NVMF_PORT", 00:21:33.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.436 "hdgst": ${hdgst:-false}, 00:21:33.436 "ddgst": ${ddgst:-false} 00:21:33.436 }, 00:21:33.436 "method": "bdev_nvme_attach_controller" 00:21:33.436 } 00:21:33.436 EOF 00:21:33.436 )") 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:33.436 { 00:21:33.436 "params": { 00:21:33.436 "name": "Nvme$subsystem", 00:21:33.436 "trtype": "$TEST_TRANSPORT", 00:21:33.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.436 "adrfam": "ipv4", 00:21:33.436 "trsvcid": "$NVMF_PORT", 00:21:33.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.436 "hdgst": ${hdgst:-false}, 00:21:33.436 "ddgst": ${ddgst:-false} 00:21:33.436 }, 00:21:33.436 "method": "bdev_nvme_attach_controller" 00:21:33.436 } 00:21:33.436 EOF 00:21:33.436 )") 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:33.436 { 00:21:33.436 "params": { 00:21:33.436 "name": "Nvme$subsystem", 00:21:33.436 "trtype": "$TEST_TRANSPORT", 00:21:33.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.436 "adrfam": "ipv4", 00:21:33.436 "trsvcid": "$NVMF_PORT", 00:21:33.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.436 "hdgst": ${hdgst:-false}, 00:21:33.436 "ddgst": ${ddgst:-false} 00:21:33.436 }, 00:21:33.436 "method": "bdev_nvme_attach_controller" 00:21:33.436 } 00:21:33.436 EOF 00:21:33.436 )") 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:33.436 { 00:21:33.436 "params": { 00:21:33.436 "name": "Nvme$subsystem", 00:21:33.436 "trtype": "$TEST_TRANSPORT", 00:21:33.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.436 "adrfam": "ipv4", 00:21:33.436 "trsvcid": "$NVMF_PORT", 00:21:33.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.436 "hdgst": ${hdgst:-false}, 00:21:33.436 "ddgst": ${ddgst:-false} 00:21:33.436 }, 00:21:33.436 "method": "bdev_nvme_attach_controller" 00:21:33.436 } 00:21:33.436 EOF 00:21:33.436 )") 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:33.436 14:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:33.436 { 00:21:33.436 "params": { 00:21:33.436 "name": "Nvme$subsystem", 00:21:33.436 "trtype": "$TEST_TRANSPORT", 00:21:33.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.436 "adrfam": "ipv4", 00:21:33.436 "trsvcid": "$NVMF_PORT", 00:21:33.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.436 "hdgst": ${hdgst:-false}, 00:21:33.436 "ddgst": ${ddgst:-false} 00:21:33.436 }, 00:21:33.436 "method": "bdev_nvme_attach_controller" 00:21:33.436 } 00:21:33.436 EOF 00:21:33.436 )") 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:33.436 { 00:21:33.436 "params": { 00:21:33.436 "name": "Nvme$subsystem", 00:21:33.436 "trtype": "$TEST_TRANSPORT", 00:21:33.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.436 "adrfam": "ipv4", 00:21:33.436 "trsvcid": "$NVMF_PORT", 00:21:33.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.436 "hdgst": ${hdgst:-false}, 00:21:33.436 "ddgst": ${ddgst:-false} 00:21:33.436 }, 00:21:33.436 "method": "bdev_nvme_attach_controller" 00:21:33.436 } 00:21:33.436 EOF 00:21:33.436 )") 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:33.436 { 00:21:33.436 "params": { 00:21:33.436 "name": "Nvme$subsystem", 00:21:33.436 "trtype": "$TEST_TRANSPORT", 00:21:33.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.436 "adrfam": "ipv4", 00:21:33.436 "trsvcid": "$NVMF_PORT", 00:21:33.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.436 "hdgst": ${hdgst:-false}, 00:21:33.436 "ddgst": ${ddgst:-false} 00:21:33.436 }, 00:21:33.436 "method": "bdev_nvme_attach_controller" 00:21:33.436 } 00:21:33.436 EOF 00:21:33.436 )") 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:33.436 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:33.436 "params": { 00:21:33.436 "name": "Nvme1", 00:21:33.436 "trtype": "tcp", 00:21:33.436 "traddr": "10.0.0.2", 00:21:33.436 "adrfam": "ipv4", 00:21:33.436 "trsvcid": "4420", 00:21:33.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:33.437 "hdgst": false, 00:21:33.437 "ddgst": false 00:21:33.437 }, 00:21:33.437 "method": "bdev_nvme_attach_controller" 00:21:33.437 },{ 00:21:33.437 "params": { 00:21:33.437 "name": "Nvme2", 00:21:33.437 "trtype": "tcp", 00:21:33.437 "traddr": "10.0.0.2", 00:21:33.437 "adrfam": "ipv4", 00:21:33.437 "trsvcid": "4420", 00:21:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:33.437 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:33.437 "hdgst": false, 00:21:33.437 "ddgst": false 00:21:33.437 }, 00:21:33.437 "method": "bdev_nvme_attach_controller" 00:21:33.437 },{ 00:21:33.437 "params": { 00:21:33.437 "name": "Nvme3", 00:21:33.437 "trtype": "tcp", 00:21:33.437 "traddr": "10.0.0.2", 00:21:33.437 "adrfam": "ipv4", 00:21:33.437 "trsvcid": "4420", 00:21:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:33.437 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:33.437 "hdgst": false, 00:21:33.437 "ddgst": false 00:21:33.437 }, 00:21:33.437 "method": "bdev_nvme_attach_controller" 00:21:33.437 },{ 00:21:33.437 "params": { 00:21:33.437 "name": "Nvme4", 00:21:33.437 "trtype": "tcp", 00:21:33.437 "traddr": "10.0.0.2", 00:21:33.437 "adrfam": "ipv4", 00:21:33.437 "trsvcid": "4420", 00:21:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:33.437 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:33.437 "hdgst": false, 00:21:33.437 "ddgst": false 00:21:33.437 }, 00:21:33.437 "method": "bdev_nvme_attach_controller" 00:21:33.437 },{ 00:21:33.437 "params": { 00:21:33.437 "name": "Nvme5", 00:21:33.437 "trtype": "tcp", 00:21:33.437 "traddr": "10.0.0.2", 00:21:33.437 "adrfam": "ipv4", 00:21:33.437 "trsvcid": "4420", 00:21:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:33.437 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:33.437 "hdgst": false, 00:21:33.437 "ddgst": false 00:21:33.437 }, 00:21:33.437 "method": "bdev_nvme_attach_controller" 00:21:33.437 },{ 00:21:33.437 "params": { 00:21:33.437 "name": "Nvme6", 00:21:33.437 "trtype": "tcp", 00:21:33.437 "traddr": "10.0.0.2", 00:21:33.437 "adrfam": "ipv4", 00:21:33.437 "trsvcid": "4420", 00:21:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:33.437 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:33.437 "hdgst": false, 00:21:33.437 "ddgst": false 00:21:33.437 }, 00:21:33.437 "method": "bdev_nvme_attach_controller" 00:21:33.437 },{ 00:21:33.437 "params": { 00:21:33.437 "name": "Nvme7", 00:21:33.437 "trtype": "tcp", 00:21:33.437 "traddr": "10.0.0.2", 00:21:33.437 "adrfam": "ipv4", 00:21:33.437 "trsvcid": "4420", 00:21:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:33.437 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:33.437 "hdgst": false, 00:21:33.437 "ddgst": false 00:21:33.437 }, 00:21:33.437 "method": "bdev_nvme_attach_controller" 00:21:33.437 },{ 00:21:33.437 "params": { 00:21:33.437 "name": "Nvme8", 00:21:33.437 "trtype": "tcp", 00:21:33.437 "traddr": "10.0.0.2", 00:21:33.437 "adrfam": "ipv4", 00:21:33.437 "trsvcid": "4420", 00:21:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:33.437 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:33.437 "hdgst": false, 00:21:33.437 "ddgst": false 00:21:33.437 }, 00:21:33.437 "method": "bdev_nvme_attach_controller" 00:21:33.437 },{ 00:21:33.437 "params": { 00:21:33.437 "name": "Nvme9", 00:21:33.437 "trtype": "tcp", 00:21:33.437 "traddr": "10.0.0.2", 00:21:33.437 "adrfam": "ipv4", 00:21:33.437 "trsvcid": "4420", 00:21:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:33.437 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:33.437 "hdgst": false, 00:21:33.437 "ddgst": false 00:21:33.437 }, 00:21:33.437 "method": "bdev_nvme_attach_controller" 00:21:33.437 },{ 00:21:33.437 "params": { 00:21:33.437 "name": "Nvme10", 00:21:33.437 "trtype": "tcp", 00:21:33.437 "traddr": "10.0.0.2", 00:21:33.437 "adrfam": "ipv4", 00:21:33.437 "trsvcid": "4420", 00:21:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:33.437 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:33.437 "hdgst": false, 00:21:33.437 "ddgst": false 00:21:33.437 }, 00:21:33.437 "method": "bdev_nvme_attach_controller" 00:21:33.437 }' 00:21:33.437 [2024-07-26 14:15:41.374192] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:21:33.437 [2024-07-26 14:15:41.374283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid268659 ] 00:21:33.437 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.437 [2024-07-26 14:15:41.439622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.695 [2024-07-26 14:15:41.551725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.066 Running I/O for 10 seconds... 00:21:35.067 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:35.067 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:35.067 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:35.067 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.067 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:35.343 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:35.612 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:35.612 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:35.612 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:35.612 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:35.612 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.612 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:35.612 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.612 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:35.612 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:35.612 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:35.870 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:35.870 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:35.870 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:35.871 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:35.871 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.871 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:35.871 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.871 14:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:35.871 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:35.871 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:35.871 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:35.871 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:35.871 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 268585 00:21:35.871 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 268585 ']' 00:21:35.871 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 268585 00:21:35.871 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:21:35.871 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:35.871 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 268585 00:21:36.143 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:36.143 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:36.143 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 268585' killing process with pid 268585 00:21:36.143 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 268585 00:21:36.143 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 268585 00:21:36.143
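The countdown traced above polls the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads (num_read_ops went 3, then 67, then 131 against the 100-read threshold), after which the target is torn down with killprocess. A minimal sketch of the two traced helpers follows; it mirrors the commands in the trace rather than quoting the SPDK source verbatim, and rpc_cmd is assumed to be the test suite's JSON-RPC wrapper seen in the trace.

waitforio() {
    local sock=$1 bdev=$2
    local ret=1 i read_io_count
    [ -z "$sock" ] && return 1
    [ -z "$bdev" ] && return 1
    # Up to 10 polls, 0.25 s apart, as in the (( i = 10 )) countdown above.
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        # Success once the bdev has completed at least 100 reads.
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                       # pid must still exist
    process_name=$(ps --no-headers -o comm= "$pid")  # Linux path in the trace
    # Refuse to kill a bare sudo wrapper, as the reactor_1 = sudo check does.
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}

With pid 268585 resolving to reactor_1, killprocess sends the default SIGTERM and waits; the flood of recv-state errors that follows is emitted as the target's TCP qpairs are torn down.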
[2024-07-26 14:15:43.899789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f0410 is same with the state(5) to be set 00:21:36.143
[log trimmed: the line above repeats many times for tqpair=0x12f0410 (timestamps 14:15:43.899864 through .900548), then for tqpair=0x12de220 (.901755 through .902580) and tqpair=0x12df080 (.905327 through .906175); the final repeats were spliced mid-line with the NVMe abort notices below]
[2024-07-26 14:15:43.906141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146
[2024-07-26 14:15:43.906182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146
[2024-07-26 14:15:43.906190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12df080 is same with the state(5) to be set 00:21:36.146
[2024-07-26 14:15:43.906201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146
[2024-07-26 14:15:43.906203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12df080 is same with the state(5) to be set 00:21:36.146
[2024-07-26 14:15:43.906217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146
[2024-07-26 14:15:43.906232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2644b50 is same with the state(5) to be set 00:21:36.146 [2024-07-26 14:15:43.906343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2643400 is same with the state(5) to be set 00:21:36.146 [2024-07-26 14:15:43.906551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f57e0 is same with the state(5) to be set 00:21:36.146 [2024-07-26 14:15:43.906732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27d8910 is same with the state(5) to be set 00:21:36.146 [2024-07-26 14:15:43.906915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.906982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.906996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 
14:15:43.907009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.907023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.907040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.907054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x264ac80 is same with the state(5) to be set 00:21:36.146 [2024-07-26 14:15:43.907099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.907119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.907134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.907149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.907164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.907177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.907192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.146 [2024-07-26 14:15:43.907206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.907220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2620830 is same with the state(5) to be set 00:21:36.146 [2024-07-26 14:15:43.907320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.146 [2024-07-26 14:15:43.907343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.146 [2024-07-26 14:15:43.907369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.146 [2024-07-26 14:15:43.907385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.147 [2024-07-26 14:15:43.907402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.147 [2024-07-26 14:15:43.907416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.147 [2024-07-26 14:15:43.907431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.147 [2024-07-26 14:15:43.907445] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.147
[2024-07-26 14:15:43.907461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.147
[2024-07-26 14:15:43.907474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.147
[2024-07-26 14:15:43.907490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.147
[2024-07-26 14:15:43.907504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.147
[2024-07-26 14:15:43.907525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.147
[2024-07-26 14:15:43.907548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.147
[log trimmed: the same WRITE / ABORTED - SQ DELETION pair repeats for cid:7 through cid:28 (nsid:1, len:128, lba:25472 rising in 128-block steps to lba:28160, timestamps 14:15:43.907570 through .908261), spliced mid-line with further repeats of the tcp.c:1653 recv-state error for tqpair=0x12df540]
[2024-07-26 14:15:43.908270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12df540 is
same with the state(5) to be set 00:21:36.148 [2024-07-26 14:15:43.908277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.148 [2024-07-26 14:15:43.908283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12df540 is same with the state(5) to be set 00:21:36.148 [2024-07-26 14:15:43.908290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.148 [2024-07-26 14:15:43.908296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12df540 is same with the state(5) to be set 00:21:36.148 [2024-07-26 14:15:43.908307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:1[2024-07-26 14:15:43.908309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12df540 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.148 the state(5) to be set 00:21:36.148 [2024-07-26 14:15:43.908322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12df540 is same with [2024-07-26 14:15:43.908322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:21:36.148 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.148 [2024-07-26 14:15:43.908339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12df540 is same with the state(5) to be set 00:21:36.148 [2024-07-26 14:15:43.908343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.148 [2024-07-26 14:15:43.908352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12df540 is same with the state(5) to be set 00:21:36.148 [2024-07-26 14:15:43.908356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.148 [2024-07-26 14:15:43.908364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12df540 is same with the state(5) to be set 00:21:36.148 [2024-07-26 14:15:43.908372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.148 [2024-07-26 14:15:43.908376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12df540 is same with the state(5) to be set 00:21:36.148 [2024-07-26 14:15:43.908386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.148 [2024-07-26 14:15:43.908389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12df540 is same with the state(5) to be set 00:21:36.148 [2024-07-26 14:15:43.908401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.148 [2024-07-26 14:15:43.908415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.148 [2024-07-26 14:15:43.908429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.148 [2024-07-26 14:15:43.908442] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.148 [2024-07-26 14:15:43.908458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.148 [2024-07-26 14:15:43.908471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.908981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.908994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909447] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x27ac420 was disconnected and freed. reset controller. 
00:21:36.149 [2024-07-26 14:15:43.909695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.149 [2024-07-26 14:15:43.909844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.149 [2024-07-26 14:15:43.909875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.149 [2024-07-26 14:15:43.909877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.149 [2024-07-26 14:15:43.909889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.149 [2024-07-26 14:15:43.909891] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150 [2024-07-26 14:15:43.909904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150 [2024-07-26 14:15:43.909905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150 [2024-07-26 14:15:43.909917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150 [2024-07-26 14:15:43.909920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150 [2024-07-26 14:15:43.909930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.909937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.909943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.909950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.909956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.909967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.909968] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.909982] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.909982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.910016] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.910042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910054] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.910067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910097] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.910109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910122] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.910135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.910176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.910219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.910251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.910279] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.910303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.910341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910355] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.910368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910380] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.910408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.910433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.150
[2024-07-26 14:15:43.910471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.150
[2024-07-26 14:15:43.910485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.150
[2024-07-26 14:15:43.910488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151
[2024-07-26 14:15:43.910497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26 14:15:43.910502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151
[2024-07-26 14:15:43.910518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26 14:15:43.910522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151
[2024-07-26 14:15:43.910552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26 14:15:43.910558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151
[2024-07-26 14:15:43.910566] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26 14:15:43.910576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151
[2024-07-26 14:15:43.910579] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26 14:15:43.910590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151
[2024-07-26 14:15:43.910592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26 14:15:43.910609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26 14:15:43.910612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151
[2024-07-26 14:15:43.910621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26 14:15:43.910627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151
[2024-07-26 14:15:43.910634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26 14:15:43.910643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151
[2024-07-26 14:15:43.910647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26 14:15:43.910657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151
[2024-07-26 14:15:43.910660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26 14:15:43.910672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26 14:15:43.910672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151
[2024-07-26 14:15:43.910686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26 14:15:43.910688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151
[2024-07-26 14:15:43.910704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26 14:15:43.910705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151
[2024-07-26 14:15:43.910719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26 14:15:43.910721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151
[2024-07-26 14:15:43.910731] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfa00 is same with the state(5) to be set 00:21:36.151
[2024-07-26
14:15:43.910737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.910751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.910766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.910780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.910795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.910808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.910831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.910862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.910879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.910893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.910912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.910928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.910943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.910956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.910971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.910984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.910999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911054] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.151 [2024-07-26 14:15:43.911474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.151 [2024-07-26 14:15:43.911489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.152 [2024-07-26 14:15:43.911501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.152 [2024-07-26 14:15:43.911526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.152 [2024-07-26 14:15:43.911561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.152 [2024-07-26 14:15:43.911577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.152 [2024-07-26 14:15:43.911591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.152 [2024-07-26 14:15:43.911607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.152 [2024-07-26 14:15:43.911624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.152 [2024-07-26 14:15:43.911640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.152 [2024-07-26 14:15:43.911654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.152 [2024-07-26 14:15:43.911669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.152 [2024-07-26 14:15:43.911683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.152 [2024-07-26 14:15:43.911698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.152 [2024-07-26 14:15:43.911712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.152 [2024-07-26 14:15:43.911727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.152 [2024-07-26 14:15:43.911741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1109bd0 is same with the state(5) to be set (this message repeated several dozen times between 14:15:43.911774 and 14:15:43.912636, interleaved with the messages below)]
00:21:36.152 [2024-07-26 14:15:43.911827] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x261a340 was disconnected and freed. reset controller.
00:21:36.152 [2024-07-26 14:15:43.912151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.152 [2024-07-26 14:15:43.912174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[nvme_qpair.c: the READ command / ABORTED - SQ DELETION (00/08) completion pair above repeats for cid:1 through cid:63 (lba 16512 through 24448 stepping by 128, len:128 each), 14:15:43.912195 through 14:15:43.914199]
[tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110a090 is same with the state(5) to be set (this message repeated several dozen times between 14:15:43.913372 and 14:15:43.914240, interleaved with the messages above)]
00:21:36.155 [2024-07-26 14:15:43.914727] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x27a5960 was disconnected and freed. reset controller.
00:21:36.155 [2024-07-26 14:15:43.918576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.155 [2024-07-26 14:15:43.918611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:36.155 [2024-07-26 14:15:43.918638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2643400 (9): Bad file descriptor 00:21:36.155 [2024-07-26 14:15:43.918660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2620830 (9): Bad file descriptor 00:21:36.155 [2024-07-26 14:15:43.918681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2644b50 (9): Bad file descriptor 00:21:36.155 [2024-07-26 14:15:43.918709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26f57e0 (9): Bad file descriptor 00:21:36.155 [2024-07-26 14:15:43.918765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.155 [2024-07-26 14:15:43.918786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.155 [2024-07-26 14:15:43.918803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.155 [2024-07-26 14:15:43.918823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.155 [2024-07-26 14:15:43.918838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.155 [2024-07-26 14:15:43.918852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.155 [2024-07-26 14:15:43.918866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.155 [2024-07-26 14:15:43.918880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.155 [2024-07-26 14:15:43.918893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27057b0 is same with the state(5) to be set 00:21:36.155 [2024-07-26 14:15:43.918939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.155 [2024-07-26 14:15:43.918959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.155 [2024-07-26 14:15:43.918974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.155 [2024-07-26 14:15:43.918988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.155 [2024-07-26 14:15:43.919002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.155 [2024-07-26 14:15:43.919016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.155 [2024-07-26 
14:15:43.919030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.155 [2024-07-26 14:15:43.919049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.155 [2024-07-26 14:15:43.919063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e0fd0 is same with the state(5) to be set 00:21:36.155 [2024-07-26 14:15:43.919092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27d8910 (9): Bad file descriptor 00:21:36.155 [2024-07-26 14:15:43.919137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.155 [2024-07-26 14:15:43.919156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.155 [2024-07-26 14:15:43.919172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.155 [2024-07-26 14:15:43.919185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.155 [2024-07-26 14:15:43.919200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.155 [2024-07-26 14:15:43.919213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.155 [2024-07-26 14:15:43.919227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.155 [2024-07-26 14:15:43.919240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.155 [2024-07-26 14:15:43.919253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e41d0 is same with the state(5) to be set 00:21:36.155 [2024-07-26 14:15:43.919282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x264ac80 (9): Bad file descriptor 00:21:36.155 [2024-07-26 14:15:43.919331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.155 [2024-07-26 14:15:43.919352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.155 [2024-07-26 14:15:43.919367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.156 [2024-07-26 14:15:43.919381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.919396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.156 [2024-07-26 14:15:43.919409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.919424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 
nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.156 [2024-07-26 14:15:43.919437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.919450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122610 is same with the state(5) to be set 00:21:36.156 [2024-07-26 14:15:43.920442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:36.156 [2024-07-26 14:15:43.920908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.920932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.920956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.920979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.920997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 
14:15:43.921190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 
14:15:43.921494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.156 [2024-07-26 14:15:43.921574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.156 [2024-07-26 14:15:43.921589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ae690 is same with the state(5) to be set 00:21:36.156 [2024-07-26 14:15:43.921664] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x27ae690 was disconnected and freed. reset controller. 00:21:36.156 [2024-07-26 14:15:43.922726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.156 [2024-07-26 14:15:43.922756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2620830 with addr=10.0.0.2, port=4420 00:21:36.156 [2024-07-26 14:15:43.922774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2620830 is same with the state(5) to be set 00:21:36.156 [2024-07-26 14:15:43.922867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.156 [2024-07-26 14:15:43.922891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2643400 with addr=10.0.0.2, port=4420 00:21:36.156 [2024-07-26 14:15:43.922907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2643400 is same with the state(5) to be set 00:21:36.156 [2024-07-26 14:15:43.922990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.156 [2024-07-26 14:15:43.923014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27d8910 with addr=10.0.0.2, port=4420 00:21:36.156 [2024-07-26 14:15:43.923031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27d8910 is same with the state(5) to be set 00:21:36.156 [2024-07-26 14:15:43.923111] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:36.156 [2024-07-26 14:15:43.924129] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:36.156 [2024-07-26 14:15:43.924210] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:36.156 [2024-07-26 14:15:43.924286] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:36.156 [2024-07-26 14:15:43.924462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:36.156 [2024-07-26 14:15:43.924533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2620830 (9): Bad file descriptor 00:21:36.156 [2024-07-26 14:15:43.924559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2643400 (9): Bad file descriptor 00:21:36.156 [2024-07-26 14:15:43.924578] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27d8910 (9): Bad file descriptor 00:21:36.156 [2024-07-26 14:15:43.924763] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:36.156 [2024-07-26 14:15:43.924846] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:36.156 [2024-07-26 14:15:43.924965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.156 [2024-07-26 14:15:43.924991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x264ac80 with addr=10.0.0.2, port=4420 00:21:36.156 [2024-07-26 14:15:43.925008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x264ac80 is same with the state(5) to be set 00:21:36.156 [2024-07-26 14:15:43.925024] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.156 [2024-07-26 14:15:43.925038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.156 [2024-07-26 14:15:43.925054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.156 [2024-07-26 14:15:43.925076] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:36.156 [2024-07-26 14:15:43.925090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:36.156 [2024-07-26 14:15:43.925103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:36.156 [2024-07-26 14:15:43.925121] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:36.156 [2024-07-26 14:15:43.925135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:36.156 [2024-07-26 14:15:43.925148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:36.156 [2024-07-26 14:15:43.925493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.157 [2024-07-26 14:15:43.925526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.157 [2024-07-26 14:15:43.925550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.157 [2024-07-26 14:15:43.925567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x264ac80 (9): Bad file descriptor 00:21:36.157 [2024-07-26 14:15:43.925629] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:36.157 [2024-07-26 14:15:43.925648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:36.157 [2024-07-26 14:15:43.925662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:36.157 [2024-07-26 14:15:43.925723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.157 [2024-07-26 14:15:43.928641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27057b0 (9): Bad file descriptor 00:21:36.157 [2024-07-26 14:15:43.928682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27e0fd0 (9): Bad file descriptor 00:21:36.157 [2024-07-26 14:15:43.928715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27e41d0 (9): Bad file descriptor 00:21:36.157 [2024-07-26 14:15:43.928757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2122610 (9): Bad file descriptor 00:21:36.157 [2024-07-26 14:15:43.928904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.928928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.928959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.928975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.928992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929173] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.157 [2024-07-26 14:15:43.929836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.157 [2024-07-26 14:15:43.929851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.929868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.929882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.929898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.929912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.929928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.929941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.929957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.929970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.929986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:36.158 [2024-07-26 14:15:43.930424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 
14:15:43.930742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.930903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.930918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ad490 is same with the state(5) to be set 00:21:36.158 [2024-07-26 14:15:43.932201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.932226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.932247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.932262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.932278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-07-26 14:15:43.932292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.158 [2024-07-26 14:15:43.932309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.932973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.932987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.159 [2024-07-26 14:15:43.933502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.159 [2024-07-26 14:15:43.933532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.933548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.933565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.933579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.933595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.933610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.933626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.933640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.933656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.933670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.933686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.933700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.933724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.933739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.933755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.933770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.933786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.933799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.933821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.933834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.933850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.933864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.933880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.933894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.933910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.933924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.933940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.933953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.933969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.933982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.933999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.934012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.934028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.934041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.934057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.934071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.934087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.934106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.934123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.934137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.934153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.934166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.934183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.160 [2024-07-26 14:15:43.934197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.160 [2024-07-26 14:15:43.934211] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x261b830 is same with the state(5) to be set 00:21:36.160 [2024-07-26 14:15:43.935476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:36.160 [2024-07-26 14:15:43.935507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:36.160 [2024-07-26 14:15:43.935867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.160 [2024-07-26 14:15:43.935899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2644b50 with addr=10.0.0.2, port=4420 00:21:36.160 [2024-07-26 14:15:43.935916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2644b50 is same with the state(5) to be set 00:21:36.160 [2024-07-26 14:15:43.936006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.160 [2024-07-26 14:15:43.936031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26f57e0 with addr=10.0.0.2, port=4420 00:21:36.160 [2024-07-26 14:15:43.936047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f57e0 is same with the state(5) to be set 00:21:36.160 [2024-07-26 14:15:43.936644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:36.160 [2024-07-26 14:15:43.936670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:36.160 [2024-07-26 14:15:43.936689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.160 [2024-07-26 14:15:43.936705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:36.160 [2024-07-26 14:15:43.936759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2644b50 (9): Bad file descriptor 00:21:36.160 [2024-07-26 14:15:43.936783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26f57e0 (9): Bad file descriptor 00:21:36.160 [2024-07-26 14:15:43.936928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.160 [2024-07-26 14:15:43.936955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27d8910 with addr=10.0.0.2, port=4420 00:21:36.160 [2024-07-26 14:15:43.936972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27d8910 is same with the state(5) to be set 00:21:36.160 [2024-07-26 14:15:43.937063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.160 [2024-07-26 14:15:43.937087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2643400 with addr=10.0.0.2, port=4420 00:21:36.160 [2024-07-26 14:15:43.937103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2643400 is same with the state(5) to be set 00:21:36.160 [2024-07-26 14:15:43.937189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.160 [2024-07-26 14:15:43.937213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2620830 with addr=10.0.0.2, port=4420 00:21:36.160 [2024-07-26 14:15:43.937229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2620830 is same with the state(5) to be set 
00:21:36.160 [2024-07-26 14:15:43.937306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.160 [2024-07-26 14:15:43.937330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x264ac80 with addr=10.0.0.2, port=4420 00:21:36.160 [2024-07-26 14:15:43.937345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x264ac80 is same with the state(5) to be set 00:21:36.160 [2024-07-26 14:15:43.937361] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:36.160 [2024-07-26 14:15:43.937374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:36.160 [2024-07-26 14:15:43.937391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:36.160 [2024-07-26 14:15:43.937411] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:36.160 [2024-07-26 14:15:43.937425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:36.160 [2024-07-26 14:15:43.937438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:36.160 [2024-07-26 14:15:43.937502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.160 [2024-07-26 14:15:43.937522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.160 [2024-07-26 14:15:43.937550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27d8910 (9): Bad file descriptor 00:21:36.160 [2024-07-26 14:15:43.937570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2643400 (9): Bad file descriptor 00:21:36.160 [2024-07-26 14:15:43.937589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2620830 (9): Bad file descriptor 00:21:36.160 [2024-07-26 14:15:43.937607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x264ac80 (9): Bad file descriptor 00:21:36.160 [2024-07-26 14:15:43.937649] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:36.160 [2024-07-26 14:15:43.937666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:36.161 [2024-07-26 14:15:43.937680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:36.161 [2024-07-26 14:15:43.937698] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:36.161 [2024-07-26 14:15:43.937711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:36.161 [2024-07-26 14:15:43.937724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:21:36.161 [2024-07-26 14:15:43.937740] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.161 [2024-07-26 14:15:43.937754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.161 [2024-07-26 14:15:43.937767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.161 [2024-07-26 14:15:43.937784] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:36.161 [2024-07-26 14:15:43.937797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:36.161 [2024-07-26 14:15:43.937810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:36.161 [2024-07-26 14:15:43.937859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.161 [2024-07-26 14:15:43.937877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.161 [2024-07-26 14:15:43.937889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.161 [2024-07-26 14:15:43.937902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.161 [2024-07-26 14:15:43.938772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.161 [2024-07-26 14:15:43.938798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.161 [2024-07-26 14:15:43.938835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.161 [2024-07-26 14:15:43.938851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.161 [2024-07-26 14:15:43.938867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.161 [2024-07-26 14:15:43.938881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.161 [2024-07-26 14:15:43.938898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.161 [2024-07-26 14:15:43.938912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.161 [2024-07-26 14:15:43.938927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.161 [2024-07-26 14:15:43.938941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.161 [2024-07-26 14:15:43.938958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.161 [2024-07-26 14:15:43.938972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.161 
[2024-07-26 14:15:43.938988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:36.161 [2024-07-26 14:15:43.939002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same READ / "ABORTED - SQ DELETION (00/08)" pair repeats for cid:7 through cid:63 (lba 17280 through 24448, step 128) ...]
00:21:36.162 [2024-07-26 14:15:43.940769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2da6ae0 is same with the state(5) to be set 
00:21:36.162 [2024-07-26 14:15:43.942041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:36.162 [2024-07-26 14:15:43.942065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same pair repeats for cid:1 through cid:63 (lba 16512 through 24448, step 128) ...]
00:21:36.164 [2024-07-26 14:15:43.944026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2f4e390 is same with the state(5) to be set 
00:21:36.164 [2024-07-26 14:15:43.945247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:36.164 [2024-07-26 14:15:43.945271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same pair repeats for cid:1 through cid:63 (lba 16512 through 24448, step 128) ...]
00:21:36.166 [2024-07-26 14:15:43.947250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x30f5e10 is same with the state(5) to be set 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.948611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.948627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.948641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.948658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.948677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.948694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.948709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.948725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.948739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.948756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.948771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.948787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.948801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.948817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.948831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.948847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.948861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.948877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.948891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.948907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.948920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.948937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.948950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.948966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.948980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.948996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.949010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.949026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.949040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.949059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.949075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.949091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.949105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.949121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.949135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.949152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.949166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.949182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.949196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.949212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.949226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.949242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.949256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.949273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.949286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.949302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.949316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.949332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.166 [2024-07-26 14:15:43.949347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.166 [2024-07-26 14:15:43.949363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:36.167 [2024-07-26 14:15:43.949846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.949984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.949999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.950015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.950029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.950045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.950059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.950076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.950089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.950105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.950119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.950135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 
14:15:43.950148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.950164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.950178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.950194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.950207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.950228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.950243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.950259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.950272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.950289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.950302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.950318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.950332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.950348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.950362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.950379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.950393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.950409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.950423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.167 [2024-07-26 14:15:43.950439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.167 [2024-07-26 14:15:43.950452] 
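The two bursts above are bdevperf's in-flight READs on queue pair 1 completing with ABORTED - SQ DELETION status as the target tears down its I/O submission queues, one burst per TCP qpair (0x30f5e10 and 0x27a4450). When triaging a run like this, the burst can be summarized from a saved copy of the console output; "console.log" below is an assumed capture of this log, not a file the harness writes itself:

    # Tally aborted completions and group the qpair errors from a captured log.
    # "console.log" is a hypothetical capture of this console output.
    grep -c 'ABORTED - SQ DELETION' console.log
    grep -o 'tqpair=0x[0-9a-f]*' console.log | sort | uniq -c | sort -rn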
00:21:36.168 [2024-07-26 14:15:43.950467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27a4450 is same with the state(5) to be set
00:21:36.168 [2024-07-26 14:15:43.952060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:36.168 [2024-07-26 14:15:43.952100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:36.168 [2024-07-26 14:15:43.952122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:36.168 task offset: 24576 on job bdev=Nvme1n1 fails
00:21:36.168
00:21:36.168 Latency(us)
00:21:36.168 Device Information          : runtime(s)    IOPS    MiB/s   Fail/s    TO/s    Average       min       max
00:21:36.168 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.168 Job: Nvme1n1 ended in about 0.85 seconds with error
00:21:36.168 Verification LBA range: start 0x0 length 0x400
00:21:36.168 Nvme1n1   : 0.85   225.16   14.07   75.05   0.00   210607.79   16214.09   239230.67
00:21:36.168 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.168 Job: Nvme2n1 ended in about 0.87 seconds with error
00:21:36.168 Verification LBA range: start 0x0 length 0x400
00:21:36.168 Nvme2n1   : 0.87   147.32    9.21   73.66   0.00   280117.85   21748.24   254765.13
00:21:36.168 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.168 Job: Nvme3n1 ended in about 0.86 seconds with error
00:21:36.168 Verification LBA range: start 0x0 length 0x400
00:21:36.168 Nvme3n1   : 0.86   223.03   13.94   24.39   0.00   243860.47   16699.54   262532.36
00:21:36.168 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.168 Job: Nvme4n1 ended in about 0.85 seconds with error
00:21:36.168 Verification LBA range: start 0x0 length 0x400
00:21:36.168 Nvme4n1   : 0.85   224.84   14.05   74.95   0.00   197114.79   10679.94   250104.79
00:21:36.168 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.168 Job: Nvme5n1 ended in about 0.87 seconds with error
00:21:36.168 Verification LBA range: start 0x0 length 0x400
00:21:36.168 Nvme5n1   : 0.87   146.76    9.17   73.38   0.00   262900.50   21554.06   268746.15
00:21:36.168 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.168 Job: Nvme6n1 ended in about 0.88 seconds with error
00:21:36.168 Verification LBA range: start 0x0 length 0x400
00:21:36.168 Nvme6n1   : 0.88   145.67    9.10   72.84   0.00   259144.31   23592.96   248551.35
00:21:36.168 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.168 Job: Nvme7n1 ended in about 0.88 seconds with error
00:21:36.168 Verification LBA range: start 0x0 length 0x400
00:21:36.168 Nvme7n1   : 0.88   145.14    9.07   72.57   0.00   254314.82   21068.61   246997.90
00:21:36.168 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.168 Job: Nvme8n1 ended in about 0.89 seconds with error
00:21:36.168 Verification LBA range: start 0x0 length 0x400
00:21:36.168 Nvme8n1   : 0.89   144.61    9.04   72.30   0.00   249514.67   20291.89   248551.35
00:21:36.168 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.168 Job: Nvme9n1 ended in about 0.89 seconds with error
00:21:36.168 Verification LBA range: start 0x0 length 0x400
00:21:36.168 Nvme9n1   : 0.89   144.09    9.01   72.04   0.00   244770.64   20291.89   253211.69
00:21:36.168 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.168 Job: Nvme10n1 ended in about 0.86 seconds with error
00:21:36.168 Verification LBA range: start 0x0 length 0x400
00:21:36.168 Nvme10n1  : 0.86   149.64    9.35   74.82   0.00   227856.81   11650.84   281173.71
00:21:36.168 ===================================================================================================================
00:21:36.168 Total     : 1696.26  106.02  686.01   0.00   240606.20   10679.94   281173.71
00:21:36.168 [2024-07-26 14:15:43.978466] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:36.168 [2024-07-26 14:15:43.978565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:36.168 [2024-07-26 14:15:43.979128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.168 [2024-07-26 14:15:43.979177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27057b0 with addr=10.0.0.2, port=4420
00:21:36.168 [2024-07-26 14:15:43.979198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27057b0 is same with the state(5) to be set
[... the same connect() failed / sock connection error / recv state triplet repeats for tqpairs 0x2122610, 0x27e0fd0 and 0x27e41d0 ...]
00:21:36.168 [2024-07-26 14:15:43.979650] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... the same failover notice repeats five more times ...]
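Each Job row in the Latency table above is one bdevperf verify job against one NVMe-oF namespace; the columns are runtime in seconds, IOPS, MiB/s, failed and timed-out I/O per second, and average/min/max latency in microseconds. The per-job header pins down the I/O parameters (core mask 0x1, queue depth 64, 64 KiB I/Os, verify workload), so the underlying invocation would look roughly like the sketch below; only -q/-o/-w follow from the headers, while the binary path and the -t runtime are assumptions, since the actual command line is assembled inside shutdown.sh:

    # Rough reconstruction of the bdevperf run behind the table above.
    # Only -q/-o/-w are confirmed by the per-job headers; the rest is assumed.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -q 64 -o 65536 -w verify -t 10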
00:21:36.168 [2024-07-26 14:15:43.980911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
[... the same resetting-controller notice repeats for cnode2, cnode3, cnode1, cnode4 and cnode10 ...]
00:21:36.168 [2024-07-26 14:15:43.981094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27057b0 (9): Bad file descriptor
[... the same flush failure repeats for tqpairs 0x2122610, 0x27e0fd0 and 0x27e41d0 ...]
00:21:36.168 [2024-07-26 14:15:43.981589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.168 [2024-07-26 14:15:43.981619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26f57e0 with addr=10.0.0.2, port=4420
00:21:36.168 [2024-07-26 14:15:43.981635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f57e0 is same with the state(5) to be set
[... the same connect() failed / sock connection error / recv state triplet repeats for tqpairs 0x2644b50, 0x264ac80, 0x2620830, 0x2643400 and 0x27d8910 ...]
00:21:36.168 [2024-07-26 14:15:43.982281] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:21:36.168 [2024-07-26 14:15:43.982295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:21:36.168 [2024-07-26 14:15:43.982312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
[... the same error-state / reinitialization-failed / failed-state triplet repeats for cnode7, cnode8 and cnode9, followed by four "Resetting controller failed." errors from bdev_nvme.c:2065 ...]
00:21:36.169 [2024-07-26 14:15:43.982626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26f57e0 (9): Bad file descriptor
[... the same flush failure repeats for tqpairs 0x2644b50, 0x264ac80, 0x2620830, 0x2643400 and 0x27d8910 ...]
[... the error-state / reinitialization-failed / failed-state triplet then repeats for cnode5, cnode2, cnode3, cnode1 and cnode4 ...]
00:21:36.169 [2024-07-26 14:15:43.982985] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:21:36.169 [2024-07-26 14:15:43.982998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:21:36.169 [2024-07-26 14:15:43.983011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:36.169 [2024-07-26 14:15:43.983049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same "Resetting controller failed." error repeats five more times ...]
00:21:36.737 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:21:36.737 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 268659
00:21:37.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (268659) - No such process
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:37.673 rmmod nvme_tcp
00:21:37.673 rmmod nvme_fabrics
00:21:37.673 rmmod nvme_keyring
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:37.673 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:39.578 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:39.578
00:21:39.578 real    0m7.356s
00:21:39.578 user    0m17.583s
00:21:39.578 sys     0m1.482s
00:21:39.578 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:39.578 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:39.578 ************************************
00:21:39.578 END TEST nvmf_shutdown_tc3
00:21:39.578 ************************************
00:21:39.578 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:21:39.578
00:21:39.578 real    0m27.117s
00:21:39.578 user    1m14.205s
00:21:39.578 sys     0m6.426s
00:21:39.579 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:39.579 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:39.579 ************************************
00:21:39.579 END TEST nvmf_shutdown
00:21:39.579 ************************************
00:21:39.837 14:15:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT
00:21:39.837
00:21:39.837 real    10m22.420s
00:21:39.837 user    24m33.207s
00:21:39.837 sys     2m29.862s
00:21:39.837 14:15:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:39.837 14:15:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:39.837 ************************************
00:21:39.837 END TEST nvmf_target_extra
00:21:39.837 ************************************
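The nvmftestfini sequence above is entirely generic: kill any leftover target process, flush I/O, remove the NVMe/TCP kernel modules (modprobe -v -r nvme-tcp also drops the now-unused nvme_fabrics and nvme_keyring, which is why three rmmod lines appear), then flush the test address. Reproduced by hand it would look roughly like the sketch below; the interface name cvl_0_1 is taken from the log, and the PID variable is of course run-specific:

    # Manual equivalent of the nvmftestfini cleanup seen in the log above.
    sudo kill -9 "$nvmfpid" 2>/dev/null || true   # $nvmfpid: leftover target PID, run-specific
    sync                                          # flush outstanding I/O first
    sudo modprobe -v -r nvme-tcp                  # also unloads nvme_fabrics and nvme_keyring
    sudo modprobe -v -r nvme-fabrics
    sudo ip -4 addr flush cvl_0_1                 # interface name as seen in the log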
00:21:39.837 14:15:47 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:21:39.837 14:15:47 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:21:39.837 14:15:47 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:39.837 14:15:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:39.837 ************************************
00:21:39.837 START TEST nvmf_host
00:21:39.837 ************************************
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:21:39.837 * Looking for test storage...
00:21:39.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
[... nvmf/common.sh@10 through @22 set NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS=, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a (via nvme gen-hostnqn), NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a, NVME_HOST, NVME_CONNECT='nvme connect', NET_TYPE=phy and NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn ...]
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2 through @6 prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the already repeatedly-prefixed PATH and echo the result ...]
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:39.837 ************************************
00:21:39.837 START TEST nvmf_multicontroller
00:21:39.837 ************************************
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:21:39.837 * Looking for test storage...
00:21:39.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:21:39.837 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
[... the same nvmf/common.sh, scripts/common.sh and paths/export.sh sourcing sequence repeats under the nvmf_tcp.nvmf_host.nvmf_multicontroller prefix ...]
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable
00:21:39.838 14:15:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=()
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=()
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=()
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=()
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=()
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=()
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=()
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx
00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:42.373 14:15:49
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:42.373 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:42.373 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.373 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:42.374 Found net devices under 0000:09:00.0: cvl_0_0 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:42.374 Found net devices under 0000:09:00.1: cvl_0_1 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.374 14:15:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:42.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:21:42.374 00:21:42.374 --- 10.0.0.2 ping statistics --- 00:21:42.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.374 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:42.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:21:42.374 00:21:42.374 --- 10.0.0.1 ping statistics --- 00:21:42.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.374 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=271196 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 271196 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 271196 ']' 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:42.374 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.374 [2024-07-26 14:15:50.152282] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
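Before the target start logged just above (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE), nvmf_tcp_init carved the two E810 port netdevs into a point-to-point test topology: the target-side port cvl_0_0 is moved into a private network namespace and addressed 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, and both directions are verified with ping. Condensed from the trace, with interface names and addresses exactly as logged:

    # Condensed replay of the nvmf_tcp_init steps in the trace above.
    ip netns add cvl_0_0_ns_spdk                       # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator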
00:21:42.374 [2024-07-26 14:15:50.152359] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.374 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.374 [2024-07-26 14:15:50.215460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:42.374 [2024-07-26 14:15:50.324696] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.374 [2024-07-26 14:15:50.324758] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.374 [2024-07-26 14:15:50.324772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.374 [2024-07-26 14:15:50.324784] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.374 [2024-07-26 14:15:50.324793] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.374 [2024-07-26 14:15:50.324888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.374 [2024-07-26 14:15:50.324952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.374 [2024-07-26 14:15:50.324955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.632 [2024-07-26 14:15:50.462982] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.632 Malloc0 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.632 
14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.632 [2024-07-26 14:15:50.519433] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.632 [2024-07-26 14:15:50.527328] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.632 Malloc1 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.632 14:15:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.632 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.633 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=271341 00:21:42.633 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:42.633 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:42.633 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 271341 /var/tmp/bdevperf.sock 00:21:42.633 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 271341 ']' 00:21:42.633 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.633 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:42.633 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
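At this point the target has been configured over /var/tmp/spdk.sock: a TCP transport (with the options logged above), two 64 MB malloc bdevs with 512-byte blocks, and two subsystems (cnode1, cnode2) each exposing its bdev on listeners 4420 and 4421 of 10.0.0.2. rpc_cmd in the harness resolves to scripts/rpc.py; replayed by hand the sequence looks like this (repo-relative paths assumed here, where the trace uses absolute Jenkins workspace paths):

    # Replay of the rpc_cmd sequence from the trace, via scripts/rpc.py.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2/Malloc1 repeat the same four steps on the same two ports.
    # bdevperf is then started paused (-z) on a second RPC socket:
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f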
00:21:42.633 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:42.633 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.890 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:42.890 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:21:42.890 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:42.890 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.890 14:15:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.147 NVMe0n1 00:21:43.147 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.147 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:43.147 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:43.147 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.147 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.147 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.148 1 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.148 request: 00:21:43.148 { 00:21:43.148 "name": "NVMe0", 00:21:43.148 "trtype": "tcp", 00:21:43.148 "traddr": "10.0.0.2", 00:21:43.148 "adrfam": "ipv4", 00:21:43.148 
"trsvcid": "4420", 00:21:43.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.148 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:43.148 "hostaddr": "10.0.0.2", 00:21:43.148 "hostsvcid": "60000", 00:21:43.148 "prchk_reftag": false, 00:21:43.148 "prchk_guard": false, 00:21:43.148 "hdgst": false, 00:21:43.148 "ddgst": false, 00:21:43.148 "method": "bdev_nvme_attach_controller", 00:21:43.148 "req_id": 1 00:21:43.148 } 00:21:43.148 Got JSON-RPC error response 00:21:43.148 response: 00:21:43.148 { 00:21:43.148 "code": -114, 00:21:43.148 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:43.148 } 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.148 request: 00:21:43.148 { 00:21:43.148 "name": "NVMe0", 00:21:43.148 "trtype": "tcp", 00:21:43.148 "traddr": "10.0.0.2", 00:21:43.148 "adrfam": "ipv4", 00:21:43.148 "trsvcid": "4420", 00:21:43.148 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:43.148 "hostaddr": "10.0.0.2", 00:21:43.148 "hostsvcid": "60000", 00:21:43.148 "prchk_reftag": false, 00:21:43.148 "prchk_guard": false, 00:21:43.148 "hdgst": false, 00:21:43.148 "ddgst": false, 00:21:43.148 "method": "bdev_nvme_attach_controller", 00:21:43.148 "req_id": 1 00:21:43.148 } 00:21:43.148 Got JSON-RPC error response 00:21:43.148 response: 00:21:43.148 { 00:21:43.148 "code": -114, 00:21:43.148 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:21:43.148 } 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.148 request: 00:21:43.148 { 00:21:43.148 "name": "NVMe0", 00:21:43.148 "trtype": "tcp", 00:21:43.148 "traddr": "10.0.0.2", 00:21:43.148 "adrfam": "ipv4", 00:21:43.148 "trsvcid": "4420", 00:21:43.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.148 "hostaddr": "10.0.0.2", 00:21:43.148 "hostsvcid": "60000", 00:21:43.148 "prchk_reftag": false, 00:21:43.148 "prchk_guard": false, 00:21:43.148 "hdgst": false, 00:21:43.148 "ddgst": false, 00:21:43.148 "multipath": "disable", 00:21:43.148 "method": "bdev_nvme_attach_controller", 00:21:43.148 "req_id": 1 00:21:43.148 } 00:21:43.148 Got JSON-RPC error response 00:21:43.148 response: 00:21:43.148 { 00:21:43.148 "code": -114, 00:21:43.148 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:43.148 } 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.148 request: 00:21:43.148 { 00:21:43.148 "name": "NVMe0", 00:21:43.148 "trtype": "tcp", 00:21:43.148 "traddr": "10.0.0.2", 00:21:43.148 "adrfam": "ipv4", 00:21:43.148 "trsvcid": "4420", 00:21:43.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.148 "hostaddr": "10.0.0.2", 00:21:43.148 "hostsvcid": "60000", 00:21:43.148 "prchk_reftag": false, 00:21:43.148 "prchk_guard": false, 00:21:43.148 "hdgst": false, 00:21:43.148 "ddgst": false, 00:21:43.148 "multipath": "failover", 00:21:43.148 "method": "bdev_nvme_attach_controller", 00:21:43.148 "req_id": 1 00:21:43.148 } 00:21:43.148 Got JSON-RPC error response 00:21:43.148 response: 00:21:43.148 { 00:21:43.148 "code": -114, 00:21:43.148 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:43.148 } 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.148 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.405 00:21:43.405 14:15:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.405 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:43.405 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.405 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.405 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.405 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:43.405 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.405 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.662 00:21:43.662 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.662 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:43.662 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.662 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:43.662 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.662 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.662 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:43.662 14:15:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:45.034 0 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 271341 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 271341 ']' 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 271341 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 271341 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:45.034 
14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 271341' 00:21:45.034 killing process with pid 271341 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 271341 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 271341 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:45.034 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:45.034 [2024-07-26 14:15:50.626551] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:21:45.034 [2024-07-26 14:15:50.626656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid271341 ] 00:21:45.034 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.034 [2024-07-26 14:15:50.686941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.034 [2024-07-26 14:15:50.794660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.034 [2024-07-26 14:15:51.498633] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name f80fb42d-c38b-48a5-8b0d-e3b9616f825a already exists 00:21:45.034 [2024-07-26 14:15:51.498672] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:f80fb42d-c38b-48a5-8b0d-e3b9616f825a alias for bdev NVMe1n1 00:21:45.034 [2024-07-26 14:15:51.498687] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:45.034 Running I/O for 1 seconds... 
00:21:45.034 00:21:45.034 Latency(us) 00:21:45.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.034 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:45.034 NVMe0n1 : 1.01 19057.64 74.44 0.00 0.00 6705.15 5291.43 12621.75 00:21:45.034 =================================================================================================================== 00:21:45.034 Total : 19057.64 74.44 0.00 0.00 6705.15 5291.43 12621.75 00:21:45.034 Received shutdown signal, test time was about 1.000000 seconds 00:21:45.034 00:21:45.034 Latency(us) 00:21:45.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.034 =================================================================================================================== 00:21:45.034 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.034 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.034 14:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.034 rmmod nvme_tcp 00:21:45.034 rmmod nvme_fabrics 00:21:45.034 rmmod nvme_keyring 00:21:45.035 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:45.035 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:45.035 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:45.035 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 271196 ']' 00:21:45.035 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 271196 00:21:45.035 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 271196 ']' 00:21:45.035 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 271196 00:21:45.035 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:21:45.035 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:45.035 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 271196 00:21:45.292 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:45.292 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:45.292 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 271196' 00:21:45.292 killing process with pid 271196 00:21:45.292 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 271196 00:21:45.292 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 271196 00:21:45.552 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:45.552 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:45.552 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:45.552 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.552 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:45.552 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.552 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.552 14:15:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.451 14:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:47.451 00:21:47.451 real 0m7.672s 00:21:47.451 user 0m12.242s 00:21:47.451 sys 0m2.294s 00:21:47.451 14:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:47.451 14:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.451 ************************************ 00:21:47.451 END TEST nvmf_multicontroller 00:21:47.452 ************************************ 00:21:47.452 14:15:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:47.452 14:15:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:47.452 14:15:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:47.452 14:15:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.710 ************************************ 00:21:47.710 START TEST nvmf_aer 00:21:47.710 ************************************ 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:47.710 * Looking for test storage... 
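The four -114 ("already exists") request/response dumps in the multicontroller run above were expected failures: after NVMe0 is attached once, the test re-issues bdev_nvme_attach_controller with a different hostnqn, a different subsystem NQN, multipath disabled, and multipath failover to the original 4420 path, asserting that each attempt is rejected. The NOT wrapper visible in the trace (common/autotest_common.sh@650 onward) inverts an exit status so that an expected failure passes; a minimal sketch of the idiom, simplified from the real helper, which also classifies the error via its es bookkeeping:

    # Minimal sketch of the NOT idiom from autotest_common.sh:
    # succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1   # unexpected success: fail the assertion
        fi
        return 0       # expected failure: assertion holds
    }
    # e.g., reusing the controller name NVMe0 for a second subsystem must be rejected:
    NOT ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000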
00:21:47.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.710 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:47.711 14:15:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:49.614 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:49.614 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:49.614 Found net devices under 0000:09:00.0: cvl_0_0 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.614 14:15:57 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:49.614 Found net devices under 0000:09:00.1: cvl_0_1 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:49.614 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.615 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.615 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:49.615 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:49.615 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.615 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.615 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.615 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.615 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:49.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:21:49.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:21:49.873 00:21:49.873 --- 10.0.0.2 ping statistics --- 00:21:49.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.873 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:21:49.873 00:21:49.873 --- 10.0.0.1 ping statistics --- 00:21:49.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.873 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=273550 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 273550 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 273550 ']' 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:49.873 14:15:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.873 [2024-07-26 14:15:57.760891] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
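The two successful pings above validate the loopback topology the harness builds before launching nvmf_tgt: one NIC port is moved into a private network namespace so target and initiator traffic actually traverses the physical link. A condensed sketch of that setup, using the commands as they appear in the trace (the cvl_0_0/cvl_0_1 interface names are specific to this machine):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator side -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target side -> initiator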
00:21:49.873 [2024-07-26 14:15:57.760956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.873 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.873 [2024-07-26 14:15:57.821087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:50.132 [2024-07-26 14:15:57.926122] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.132 [2024-07-26 14:15:57.926172] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.132 [2024-07-26 14:15:57.926197] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.132 [2024-07-26 14:15:57.926208] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.132 [2024-07-26 14:15:57.926217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.132 [2024-07-26 14:15:57.926296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.132 [2024-07-26 14:15:57.926413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.132 [2024-07-26 14:15:57.926481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:50.132 [2024-07-26 14:15:57.926484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.132 [2024-07-26 14:15:58.071739] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.132 Malloc0 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.132 14:15:58 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.132 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.133 [2024-07-26 14:15:58.124861] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.133 [ 00:21:50.133 { 00:21:50.133 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:50.133 "subtype": "Discovery", 00:21:50.133 "listen_addresses": [], 00:21:50.133 "allow_any_host": true, 00:21:50.133 "hosts": [] 00:21:50.133 }, 00:21:50.133 { 00:21:50.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.133 "subtype": "NVMe", 00:21:50.133 "listen_addresses": [ 00:21:50.133 { 00:21:50.133 "trtype": "TCP", 00:21:50.133 "adrfam": "IPv4", 00:21:50.133 "traddr": "10.0.0.2", 00:21:50.133 "trsvcid": "4420" 00:21:50.133 } 00:21:50.133 ], 00:21:50.133 "allow_any_host": true, 00:21:50.133 "hosts": [], 00:21:50.133 "serial_number": "SPDK00000000000001", 00:21:50.133 "model_number": "SPDK bdev Controller", 00:21:50.133 "max_namespaces": 2, 00:21:50.133 "min_cntlid": 1, 00:21:50.133 "max_cntlid": 65519, 00:21:50.133 "namespaces": [ 00:21:50.133 { 00:21:50.133 "nsid": 1, 00:21:50.133 "bdev_name": "Malloc0", 00:21:50.133 "name": "Malloc0", 00:21:50.133 "nguid": "CDC05767A853426CBB49DE8B03812AEE", 00:21:50.133 "uuid": "cdc05767-a853-426c-bb49-de8b03812aee" 00:21:50.133 } 00:21:50.133 ] 00:21:50.133 } 00:21:50.133 ] 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=273574 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:50.133 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:50.391 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.391 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:50.391 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:50.391 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:50.391 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:50.391 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:50.391 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:50.391 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:50.391 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:50.391 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.391 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.391 Malloc1 00:21:50.391 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.391 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:50.391 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.391 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.650 [ 00:21:50.650 { 00:21:50.650 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:50.650 "subtype": "Discovery", 00:21:50.650 "listen_addresses": [], 00:21:50.650 "allow_any_host": true, 00:21:50.650 "hosts": [] 00:21:50.650 }, 00:21:50.650 { 00:21:50.650 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.650 "subtype": "NVMe", 00:21:50.650 "listen_addresses": [ 00:21:50.650 { 00:21:50.650 "trtype": "TCP", 00:21:50.650 "adrfam": "IPv4", 00:21:50.650 "traddr": "10.0.0.2", 00:21:50.650 "trsvcid": "4420" 00:21:50.650 } 00:21:50.650 ], 00:21:50.650 "allow_any_host": true, 00:21:50.650 "hosts": [], 00:21:50.650 "serial_number": "SPDK00000000000001", 00:21:50.650 "model_number": "SPDK bdev Controller", 00:21:50.650 "max_namespaces": 2, 00:21:50.650 "min_cntlid": 1, 00:21:50.650 "max_cntlid": 65519, 00:21:50.650 "namespaces": [ 00:21:50.650 { 00:21:50.650 "nsid": 1, 00:21:50.650 "bdev_name": "Malloc0", 00:21:50.650 "name": "Malloc0", 00:21:50.650 "nguid": "CDC05767A853426CBB49DE8B03812AEE", 00:21:50.650 "uuid": "cdc05767-a853-426c-bb49-de8b03812aee" 00:21:50.650 }, 00:21:50.650 { 00:21:50.650 "nsid": 2, 00:21:50.650 "bdev_name": "Malloc1", 00:21:50.650 "name": "Malloc1", 00:21:50.650 "nguid": 
"C59C50FF01CD48368E84D38E35D9F1B8", 00:21:50.650 "uuid": "c59c50ff-01cd-4836-8e84-d38e35d9f1b8" 00:21:50.650 } 00:21:50.650 ] 00:21:50.650 } 00:21:50.650 ] 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 273574 00:21:50.650 Asynchronous Event Request test 00:21:50.650 Attaching to 10.0.0.2 00:21:50.650 Attached to 10.0.0.2 00:21:50.650 Registering asynchronous event callbacks... 00:21:50.650 Starting namespace attribute notice tests for all controllers... 00:21:50.650 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:50.650 aer_cb - Changed Namespace 00:21:50.650 Cleaning up... 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.650 rmmod nvme_tcp 00:21:50.650 rmmod nvme_fabrics 00:21:50.650 rmmod nvme_keyring 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 273550 ']' 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 273550 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 273550 ']' 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 273550 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@955 -- # uname 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 273550 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 273550' 00:21:50.650 killing process with pid 273550 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 273550 00:21:50.650 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 273550 00:21:50.911 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:50.911 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:50.911 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:50.911 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.911 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:50.911 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.911 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.911 14:15:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:53.445 00:21:53.445 real 0m5.409s 00:21:53.445 user 0m4.284s 00:21:53.445 sys 0m1.884s 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.445 ************************************ 00:21:53.445 END TEST nvmf_aer 00:21:53.445 ************************************ 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.445 ************************************ 00:21:53.445 START TEST nvmf_async_init 00:21:53.445 ************************************ 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:53.445 * Looking for test storage... 
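Before the next suite starts in earnest, note the target bring-up pattern that nvmf_aer just exercised and that nvmf_async_init repeats below. rpc_cmd is the harness wrapper for SPDK's JSON-RPC client; the equivalent direct calls, reconstructed from the trace above, would be:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The Namespace Attribute Changed AEN was then provoked by hot-adding a second namespace (bdev_malloc_create 64 4096 --name Malloc1, then nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2) while test/nvme/aer/aer was attached and waiting on /tmp/aer_touch_file.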
00:21:53.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.445 14:16:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.445 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:53.445 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:53.445 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:53.446 14:16:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7064068bf0034c9b8e919d0354bd2b8e 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:53.446 14:16:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:55.348 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:55.348 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:55.348 Found net devices under 0000:09:00.0: cvl_0_0 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:55.348 Found net devices under 0000:09:00.1: cvl_0_1 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.348 14:16:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.348 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.348 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.348 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.348 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:55.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:21:55.349 00:21:55.349 --- 10.0.0.2 ping statistics --- 00:21:55.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.349 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:55.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:21:55.349 00:21:55.349 --- 10.0.0.1 ping statistics --- 00:21:55.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.349 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=275628 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 275628 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 275628 ']' 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:55.349 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.349 [2024-07-26 14:16:03.180084] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
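The setup traced above gives each end of the NVMe-oF test a real path through the E810 hardware: one port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2 for the target, its sibling port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, and the firewall is opened for the NVMe/TCP port before nvmf_tgt is launched inside the namespace. A condensed sketch of the same sequence follows; the commands mirror the trace, while the readiness poll is only an assumption about what the suite's waitforlisten helper does internally, not a verbatim copy of it:

    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # launch the target inside the namespace, then poll its RPC socket
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    until sudo ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The cross-namespace pings above are the sanity check that this topology carries traffic in both directions before any NVMe/TCP work begins.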
00:21:55.349 [2024-07-26 14:16:03.180155] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.349 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.349 [2024-07-26 14:16:03.242434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.349 [2024-07-26 14:16:03.348009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.349 [2024-07-26 14:16:03.348052] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.349 [2024-07-26 14:16:03.348075] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.349 [2024-07-26 14:16:03.348086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.349 [2024-07-26 14:16:03.348099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.349 [2024-07-26 14:16:03.348123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.607 [2024-07-26 14:16:03.490963] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.607 null0 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:55.607 14:16:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7064068bf0034c9b8e919d0354bd2b8e 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.607 [2024-07-26 14:16:03.531216] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.607 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.865 nvme0n1 00:21:55.865 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.865 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:55.865 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.865 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.865 [ 00:21:55.865 { 00:21:55.865 "name": "nvme0n1", 00:21:55.865 "aliases": [ 00:21:55.865 "7064068b-f003-4c9b-8e91-9d0354bd2b8e" 00:21:55.865 ], 00:21:55.865 "product_name": "NVMe disk", 00:21:55.865 "block_size": 512, 00:21:55.865 "num_blocks": 2097152, 00:21:55.865 "uuid": "7064068b-f003-4c9b-8e91-9d0354bd2b8e", 00:21:55.865 "assigned_rate_limits": { 00:21:55.865 "rw_ios_per_sec": 0, 00:21:55.865 "rw_mbytes_per_sec": 0, 00:21:55.865 "r_mbytes_per_sec": 0, 00:21:55.865 "w_mbytes_per_sec": 0 00:21:55.865 }, 00:21:55.865 "claimed": false, 00:21:55.865 "zoned": false, 00:21:55.865 "supported_io_types": { 00:21:55.865 "read": true, 00:21:55.865 "write": true, 00:21:55.865 "unmap": false, 00:21:55.865 "flush": true, 00:21:55.865 "reset": true, 00:21:55.865 "nvme_admin": true, 00:21:55.865 "nvme_io": true, 00:21:55.865 "nvme_io_md": false, 00:21:55.865 "write_zeroes": true, 00:21:55.865 "zcopy": false, 00:21:55.865 "get_zone_info": false, 00:21:55.865 "zone_management": false, 00:21:55.865 "zone_append": false, 00:21:55.865 "compare": true, 00:21:55.865 "compare_and_write": true, 00:21:55.865 "abort": true, 00:21:55.865 "seek_hole": false, 00:21:55.865 "seek_data": false, 00:21:55.865 "copy": true, 00:21:55.865 "nvme_iov_md": 
false 00:21:55.865 }, 00:21:55.865 "memory_domains": [ 00:21:55.865 { 00:21:55.865 "dma_device_id": "system", 00:21:55.865 "dma_device_type": 1 00:21:55.865 } 00:21:55.865 ], 00:21:55.865 "driver_specific": { 00:21:55.865 "nvme": [ 00:21:55.865 { 00:21:55.865 "trid": { 00:21:55.865 "trtype": "TCP", 00:21:55.865 "adrfam": "IPv4", 00:21:55.865 "traddr": "10.0.0.2", 00:21:55.865 "trsvcid": "4420", 00:21:55.865 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:55.865 }, 00:21:55.865 "ctrlr_data": { 00:21:55.865 "cntlid": 1, 00:21:55.865 "vendor_id": "0x8086", 00:21:55.865 "model_number": "SPDK bdev Controller", 00:21:55.865 "serial_number": "00000000000000000000", 00:21:55.865 "firmware_revision": "24.09", 00:21:55.865 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:55.865 "oacs": { 00:21:55.865 "security": 0, 00:21:55.865 "format": 0, 00:21:55.865 "firmware": 0, 00:21:55.865 "ns_manage": 0 00:21:55.865 }, 00:21:55.865 "multi_ctrlr": true, 00:21:55.865 "ana_reporting": false 00:21:55.865 }, 00:21:55.865 "vs": { 00:21:55.865 "nvme_version": "1.3" 00:21:55.865 }, 00:21:55.865 "ns_data": { 00:21:55.865 "id": 1, 00:21:55.865 "can_share": true 00:21:55.865 } 00:21:55.865 } 00:21:55.865 ], 00:21:55.865 "mp_policy": "active_passive" 00:21:55.865 } 00:21:55.865 } 00:21:55.865 ] 00:21:55.865 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.865 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:55.865 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.865 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.865 [2024-07-26 14:16:03.779854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:55.865 [2024-07-26 14:16:03.779940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2043210 (9): Bad file descriptor 00:21:56.123 [2024-07-26 14:16:03.911660] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
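The reset exercised here tears the TCP connection down underneath in-flight processing, so the *ERROR* about failing to flush the qpair ("Bad file descriptor") is the expected side effect of the deliberate disconnect rather than a test failure; the notice that follows confirms the reconnect completed. The new association is visible in the bdev dump below, where ctrlr_data.cntlid has advanced from 1 to 2. A minimal sketch of the same check, assuming rpc.py on the default /var/tmp/spdk.sock socket (the trace drives the same RPCs through its rpc_cmd wrapper):

    ./scripts/rpc.py bdev_nvme_reset_controller nvme0
    # each reconnect forms a new association, so the controller ID increments
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 | grep '"cntlid"'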
00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.123 [ 00:21:56.123 { 00:21:56.123 "name": "nvme0n1", 00:21:56.123 "aliases": [ 00:21:56.123 "7064068b-f003-4c9b-8e91-9d0354bd2b8e" 00:21:56.123 ], 00:21:56.123 "product_name": "NVMe disk", 00:21:56.123 "block_size": 512, 00:21:56.123 "num_blocks": 2097152, 00:21:56.123 "uuid": "7064068b-f003-4c9b-8e91-9d0354bd2b8e", 00:21:56.123 "assigned_rate_limits": { 00:21:56.123 "rw_ios_per_sec": 0, 00:21:56.123 "rw_mbytes_per_sec": 0, 00:21:56.123 "r_mbytes_per_sec": 0, 00:21:56.123 "w_mbytes_per_sec": 0 00:21:56.123 }, 00:21:56.123 "claimed": false, 00:21:56.123 "zoned": false, 00:21:56.123 "supported_io_types": { 00:21:56.123 "read": true, 00:21:56.123 "write": true, 00:21:56.123 "unmap": false, 00:21:56.123 "flush": true, 00:21:56.123 "reset": true, 00:21:56.123 "nvme_admin": true, 00:21:56.123 "nvme_io": true, 00:21:56.123 "nvme_io_md": false, 00:21:56.123 "write_zeroes": true, 00:21:56.123 "zcopy": false, 00:21:56.123 "get_zone_info": false, 00:21:56.123 "zone_management": false, 00:21:56.123 "zone_append": false, 00:21:56.123 "compare": true, 00:21:56.123 "compare_and_write": true, 00:21:56.123 "abort": true, 00:21:56.123 "seek_hole": false, 00:21:56.123 "seek_data": false, 00:21:56.123 "copy": true, 00:21:56.123 "nvme_iov_md": false 00:21:56.123 }, 00:21:56.123 "memory_domains": [ 00:21:56.123 { 00:21:56.123 "dma_device_id": "system", 00:21:56.123 "dma_device_type": 1 00:21:56.123 } 00:21:56.123 ], 00:21:56.123 "driver_specific": { 00:21:56.123 "nvme": [ 00:21:56.123 { 00:21:56.123 "trid": { 00:21:56.123 "trtype": "TCP", 00:21:56.123 "adrfam": "IPv4", 00:21:56.123 "traddr": "10.0.0.2", 00:21:56.123 "trsvcid": "4420", 00:21:56.123 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:56.123 }, 00:21:56.123 "ctrlr_data": { 00:21:56.123 "cntlid": 2, 00:21:56.123 "vendor_id": "0x8086", 00:21:56.123 "model_number": "SPDK bdev Controller", 00:21:56.123 "serial_number": "00000000000000000000", 00:21:56.123 "firmware_revision": "24.09", 00:21:56.123 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:56.123 "oacs": { 00:21:56.123 "security": 0, 00:21:56.123 "format": 0, 00:21:56.123 "firmware": 0, 00:21:56.123 "ns_manage": 0 00:21:56.123 }, 00:21:56.123 "multi_ctrlr": true, 00:21:56.123 "ana_reporting": false 00:21:56.123 }, 00:21:56.123 "vs": { 00:21:56.123 "nvme_version": "1.3" 00:21:56.123 }, 00:21:56.123 "ns_data": { 00:21:56.123 "id": 1, 00:21:56.123 "can_share": true 00:21:56.123 } 00:21:56.123 } 00:21:56.123 ], 00:21:56.123 "mp_policy": "active_passive" 00:21:56.123 } 00:21:56.123 } 00:21:56.123 ] 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.123 14:16:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.04RZCoUYyz 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.04RZCoUYyz 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.123 [2024-07-26 14:16:03.956394] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:56.123 [2024-07-26 14:16:03.956492] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.04RZCoUYyz 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.123 [2024-07-26 14:16:03.964414] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.04RZCoUYyz 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.123 14:16:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.123 [2024-07-26 14:16:03.972438] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:56.123 [2024-07-26 14:16:03.972484] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:56.123 nvme0n1 00:21:56.123 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.123 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:56.123 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
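The attach just traced goes over TLS: an interchange-format PSK (the NVMeTLSkey-1:01:... string echoed above) is written to a file with 0600 permissions, open-host access to the subsystem is disabled, a second listener on port 4421 is flagged --secure-channel so it accepts only TLS connections, and the specific host NQN is authorized with that PSK before the initiator connects with the matching key. The deprecation warnings about the PSK file path are expected at this SPDK revision, per the trace itself. Condensed, the sequence is as follows (the redirect into the key file is implied by the chmod in the trace; rpc.py on the default socket is an assumption):

    key=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
    chmod 0600 "$key"
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk "$key"
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"
    # the bdev dump that follows shows trsvcid 4421 and cntlid 3 for this association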
00:21:56.123 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.123 [ 00:21:56.123 { 00:21:56.124 "name": "nvme0n1", 00:21:56.124 "aliases": [ 00:21:56.124 "7064068b-f003-4c9b-8e91-9d0354bd2b8e" 00:21:56.124 ], 00:21:56.124 "product_name": "NVMe disk", 00:21:56.124 "block_size": 512, 00:21:56.124 "num_blocks": 2097152, 00:21:56.124 "uuid": "7064068b-f003-4c9b-8e91-9d0354bd2b8e", 00:21:56.124 "assigned_rate_limits": { 00:21:56.124 "rw_ios_per_sec": 0, 00:21:56.124 "rw_mbytes_per_sec": 0, 00:21:56.124 "r_mbytes_per_sec": 0, 00:21:56.124 "w_mbytes_per_sec": 0 00:21:56.124 }, 00:21:56.124 "claimed": false, 00:21:56.124 "zoned": false, 00:21:56.124 "supported_io_types": { 00:21:56.124 "read": true, 00:21:56.124 "write": true, 00:21:56.124 "unmap": false, 00:21:56.124 "flush": true, 00:21:56.124 "reset": true, 00:21:56.124 "nvme_admin": true, 00:21:56.124 "nvme_io": true, 00:21:56.124 "nvme_io_md": false, 00:21:56.124 "write_zeroes": true, 00:21:56.124 "zcopy": false, 00:21:56.124 "get_zone_info": false, 00:21:56.124 "zone_management": false, 00:21:56.124 "zone_append": false, 00:21:56.124 "compare": true, 00:21:56.124 "compare_and_write": true, 00:21:56.124 "abort": true, 00:21:56.124 "seek_hole": false, 00:21:56.124 "seek_data": false, 00:21:56.124 "copy": true, 00:21:56.124 "nvme_iov_md": false 00:21:56.124 }, 00:21:56.124 "memory_domains": [ 00:21:56.124 { 00:21:56.124 "dma_device_id": "system", 00:21:56.124 "dma_device_type": 1 00:21:56.124 } 00:21:56.124 ], 00:21:56.124 "driver_specific": { 00:21:56.124 "nvme": [ 00:21:56.124 { 00:21:56.124 "trid": { 00:21:56.124 "trtype": "TCP", 00:21:56.124 "adrfam": "IPv4", 00:21:56.124 "traddr": "10.0.0.2", 00:21:56.124 "trsvcid": "4421", 00:21:56.124 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:56.124 }, 00:21:56.124 "ctrlr_data": { 00:21:56.124 "cntlid": 3, 00:21:56.124 "vendor_id": "0x8086", 00:21:56.124 "model_number": "SPDK bdev Controller", 00:21:56.124 "serial_number": "00000000000000000000", 00:21:56.124 "firmware_revision": "24.09", 00:21:56.124 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:56.124 "oacs": { 00:21:56.124 "security": 0, 00:21:56.124 "format": 0, 00:21:56.124 "firmware": 0, 00:21:56.124 "ns_manage": 0 00:21:56.124 }, 00:21:56.124 "multi_ctrlr": true, 00:21:56.124 "ana_reporting": false 00:21:56.124 }, 00:21:56.124 "vs": { 00:21:56.124 "nvme_version": "1.3" 00:21:56.124 }, 00:21:56.124 "ns_data": { 00:21:56.124 "id": 1, 00:21:56.124 "can_share": true 00:21:56.124 } 00:21:56.124 } 00:21:56.124 ], 00:21:56.124 "mp_policy": "active_passive" 00:21:56.124 } 00:21:56.124 } 00:21:56.124 ] 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.04RZCoUYyz 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:56.124 14:16:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:56.124 rmmod nvme_tcp 00:21:56.124 rmmod nvme_fabrics 00:21:56.124 rmmod nvme_keyring 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 275628 ']' 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 275628 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 275628 ']' 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 275628 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:56.124 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 275628 00:21:56.383 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:56.383 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:56.383 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 275628' 00:21:56.383 killing process with pid 275628 00:21:56.383 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 275628 00:21:56.383 [2024-07-26 14:16:04.154166] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:56.383 [2024-07-26 14:16:04.154199] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:56.383 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 275628 00:21:56.383 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:56.383 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:56.383 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:56.383 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:56.383 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:56.383 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.383 14:16:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.383 14:16:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:58.917 00:21:58.917 real 0m5.484s 00:21:58.917 user 0m2.093s 00:21:58.917 sys 0m1.754s 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.917 ************************************ 00:21:58.917 END TEST nvmf_async_init 00:21:58.917 ************************************ 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.917 ************************************ 00:21:58.917 START TEST dma 00:21:58.917 ************************************ 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:58.917 * Looking for test storage... 00:21:58.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:58.917 
14:16:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.917 14:16:06 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:58.917 00:21:58.917 real 0m0.064s 00:21:58.917 user 0m0.034s 00:21:58.917 sys 0m0.035s 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:58.917 ************************************ 00:21:58.917 END TEST dma 00:21:58.917 ************************************ 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.917 ************************************ 00:21:58.917 START TEST nvmf_identify 00:21:58.917 ************************************ 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:58.917 * Looking for test storage... 00:21:58.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.917 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:58.918 14:16:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:00.819 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.819 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:00.819 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:00.819 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:00.819 14:16:08 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:00.819 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:00.819 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:00.819 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:00.819 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:00.820 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.820 14:16:08 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:00.820 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:00.820 Found net devices under 0000:09:00.0: cvl_0_0 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:00.820 Found net devices under 0000:09:00.1: cvl_0_1 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:00.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:22:00.820 00:22:00.820 --- 10.0.0.2 ping statistics --- 00:22:00.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.820 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:00.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:22:00.820 00:22:00.820 --- 10.0.0.1 ping statistics --- 00:22:00.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.820 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=277676 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:00.820 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 277676 00:22:00.821 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 277676 ']' 00:22:00.821 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.821 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.821 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.821 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.821 14:16:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:00.821 [2024-07-26 14:16:08.815072] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:22:00.821 [2024-07-26 14:16:08.815155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.078 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.078 [2024-07-26 14:16:08.880491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:01.078 [2024-07-26 14:16:08.983762] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.078 [2024-07-26 14:16:08.983829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.078 [2024-07-26 14:16:08.983852] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.078 [2024-07-26 14:16:08.983877] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.078 [2024-07-26 14:16:08.983886] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:01.078 [2024-07-26 14:16:08.983963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.078 [2024-07-26 14:16:08.984026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.078 [2024-07-26 14:16:08.984153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:01.078 [2024-07-26 14:16:08.984156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.338 [2024-07-26 14:16:09.114967] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.338 Malloc0 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.338 [2024-07-26 14:16:09.196405] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.338 [ 00:22:01.338 { 00:22:01.338 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:01.338 "subtype": "Discovery", 00:22:01.338 "listen_addresses": [ 00:22:01.338 { 00:22:01.338 "trtype": "TCP", 00:22:01.338 "adrfam": "IPv4", 00:22:01.338 "traddr": "10.0.0.2", 00:22:01.338 "trsvcid": "4420" 00:22:01.338 } 00:22:01.338 ], 00:22:01.338 "allow_any_host": true, 00:22:01.338 "hosts": [] 00:22:01.338 }, 00:22:01.338 { 00:22:01.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.338 "subtype": "NVMe", 00:22:01.338 "listen_addresses": [ 00:22:01.338 { 00:22:01.338 "trtype": "TCP", 00:22:01.338 "adrfam": "IPv4", 00:22:01.338 "traddr": "10.0.0.2", 00:22:01.338 "trsvcid": "4420" 00:22:01.338 } 00:22:01.338 ], 00:22:01.338 "allow_any_host": true, 00:22:01.338 "hosts": [], 00:22:01.338 "serial_number": "SPDK00000000000001", 00:22:01.338 "model_number": "SPDK bdev Controller", 00:22:01.338 "max_namespaces": 32, 00:22:01.338 "min_cntlid": 1, 00:22:01.338 "max_cntlid": 65519, 00:22:01.338 "namespaces": [ 00:22:01.338 { 00:22:01.338 "nsid": 1, 00:22:01.338 "bdev_name": "Malloc0", 00:22:01.338 "name": "Malloc0", 00:22:01.338 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:01.338 "eui64": "ABCDEF0123456789", 00:22:01.338 "uuid": "23608a9f-f189-4175-b64d-14f53df5cdd1" 00:22:01.338 } 00:22:01.338 ] 00:22:01.338 } 00:22:01.338 ] 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.338 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:01.338 [2024-07-26 14:16:09.238603] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:22:01.338 [2024-07-26 14:16:09.238648] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277776 ] 00:22:01.338 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.338 [2024-07-26 14:16:09.273905] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:01.338 [2024-07-26 14:16:09.273969] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:01.338 [2024-07-26 14:16:09.273979] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:01.338 [2024-07-26 14:16:09.273995] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:01.338 [2024-07-26 14:16:09.274009] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:01.338 [2024-07-26 14:16:09.277597] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:01.338 [2024-07-26 14:16:09.277665] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1189540 0 00:22:01.338 [2024-07-26 14:16:09.284551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:01.338 [2024-07-26 14:16:09.284589] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:01.338 [2024-07-26 14:16:09.284600] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:01.338 [2024-07-26 14:16:09.284606] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:01.338 [2024-07-26 14:16:09.284676] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.338 [2024-07-26 14:16:09.284695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.338 [2024-07-26 14:16:09.284704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1189540) 00:22:01.338 [2024-07-26 14:16:09.284726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:01.338 [2024-07-26 14:16:09.284753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e93c0, cid 0, qid 0 00:22:01.338 [2024-07-26 14:16:09.292542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.338 [2024-07-26 14:16:09.292561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.338 [2024-07-26 14:16:09.292569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.338 [2024-07-26 14:16:09.292587] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e93c0) on tqpair=0x1189540 00:22:01.338 [2024-07-26 14:16:09.292607] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:01.338 [2024-07-26 14:16:09.292620] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:01.338 [2024-07-26 14:16:09.292630] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to read vs wait for vs (no timeout) 00:22:01.338 [2024-07-26 14:16:09.292652] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.338 [2024-07-26 14:16:09.292661] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.338 [2024-07-26 14:16:09.292668] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1189540) 00:22:01.338 [2024-07-26 14:16:09.292679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.339 [2024-07-26 14:16:09.292702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e93c0, cid 0, qid 0 00:22:01.339 [2024-07-26 14:16:09.292814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.339 [2024-07-26 14:16:09.292828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.339 [2024-07-26 14:16:09.292835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.292842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e93c0) on tqpair=0x1189540 00:22:01.339 [2024-07-26 14:16:09.292855] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:01.339 [2024-07-26 14:16:09.292869] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:01.339 [2024-07-26 14:16:09.292881] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.292888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.292895] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1189540) 00:22:01.339 [2024-07-26 14:16:09.292905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.339 [2024-07-26 14:16:09.292927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e93c0, cid 0, qid 0 00:22:01.339 [2024-07-26 14:16:09.292999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.339 [2024-07-26 14:16:09.293011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.339 [2024-07-26 14:16:09.293018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.293025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e93c0) on tqpair=0x1189540 00:22:01.339 [2024-07-26 14:16:09.293034] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:01.339 [2024-07-26 14:16:09.293048] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:01.339 [2024-07-26 14:16:09.293067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.293074] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.293084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1189540) 00:22:01.339 [2024-07-26 14:16:09.293095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.339 [2024-07-26 14:16:09.293116] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e93c0, cid 0, qid 0 00:22:01.339 [2024-07-26 14:16:09.293189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.339 [2024-07-26 14:16:09.293202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.339 [2024-07-26 14:16:09.293209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.293216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e93c0) on tqpair=0x1189540 00:22:01.339 [2024-07-26 14:16:09.293225] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:01.339 [2024-07-26 14:16:09.293241] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.293250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.293256] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1189540) 00:22:01.339 [2024-07-26 14:16:09.293266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.339 [2024-07-26 14:16:09.293287] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e93c0, cid 0, qid 0 00:22:01.339 [2024-07-26 14:16:09.293356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.339 [2024-07-26 14:16:09.293367] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.339 [2024-07-26 14:16:09.293374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.293381] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e93c0) on tqpair=0x1189540 00:22:01.339 [2024-07-26 14:16:09.293389] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:01.339 [2024-07-26 14:16:09.293398] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:01.339 [2024-07-26 14:16:09.293410] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:01.339 [2024-07-26 14:16:09.293521] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:01.339 [2024-07-26 14:16:09.293537] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:01.339 [2024-07-26 14:16:09.293554] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.293561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.293568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1189540) 00:22:01.339 [2024-07-26 14:16:09.293578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.339 [2024-07-26 14:16:09.293609] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e93c0, cid 0, qid 0 00:22:01.339 [2024-07-26 14:16:09.293686] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:22:01.339 [2024-07-26 14:16:09.293697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.339 [2024-07-26 14:16:09.293704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.293711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e93c0) on tqpair=0x1189540 00:22:01.339 [2024-07-26 14:16:09.293719] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:01.339 [2024-07-26 14:16:09.293739] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.293749] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.293755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1189540) 00:22:01.339 [2024-07-26 14:16:09.293765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.339 [2024-07-26 14:16:09.293786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e93c0, cid 0, qid 0 00:22:01.339 [2024-07-26 14:16:09.293865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.339 [2024-07-26 14:16:09.293879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.339 [2024-07-26 14:16:09.293886] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.293892] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e93c0) on tqpair=0x1189540 00:22:01.339 [2024-07-26 14:16:09.293901] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:01.339 [2024-07-26 14:16:09.293909] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:01.339 [2024-07-26 14:16:09.293922] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:01.339 [2024-07-26 14:16:09.293941] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:01.339 [2024-07-26 14:16:09.293960] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.293968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1189540) 00:22:01.339 [2024-07-26 14:16:09.293979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.339 [2024-07-26 14:16:09.293999] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e93c0, cid 0, qid 0 00:22:01.339 [2024-07-26 14:16:09.294118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.339 [2024-07-26 14:16:09.294134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.339 [2024-07-26 14:16:09.294141] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.294148] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1189540): datao=0, datal=4096, cccid=0 00:22:01.339 [2024-07-26 14:16:09.294156] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11e93c0) on tqpair(0x1189540): expected_datao=0, payload_size=4096 00:22:01.339 [2024-07-26 14:16:09.294164] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.294183] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.294193] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.338541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.339 [2024-07-26 14:16:09.338559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.339 [2024-07-26 14:16:09.338567] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.338573] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e93c0) on tqpair=0x1189540 00:22:01.339 [2024-07-26 14:16:09.338586] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:01.339 [2024-07-26 14:16:09.338595] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:01.339 [2024-07-26 14:16:09.338603] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:01.339 [2024-07-26 14:16:09.338612] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:01.339 [2024-07-26 14:16:09.338625] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:01.339 [2024-07-26 14:16:09.338633] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:01.339 [2024-07-26 14:16:09.338649] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:01.339 [2024-07-26 14:16:09.338676] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.338684] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.339 [2024-07-26 14:16:09.338691] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1189540) 00:22:01.339 [2024-07-26 14:16:09.338702] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:01.339 [2024-07-26 14:16:09.338726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e93c0, cid 0, qid 0 00:22:01.339 [2024-07-26 14:16:09.338814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.339 [2024-07-26 14:16:09.338828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.340 [2024-07-26 14:16:09.338835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.338842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e93c0) on tqpair=0x1189540 00:22:01.340 [2024-07-26 14:16:09.338856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.338863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.338869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1189540) 00:22:01.340 [2024-07-26 14:16:09.338879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.340 [2024-07-26 14:16:09.338889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.338896] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.338902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1189540) 00:22:01.340 [2024-07-26 14:16:09.338910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.340 [2024-07-26 14:16:09.338920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.338926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.338932] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1189540) 00:22:01.340 [2024-07-26 14:16:09.338941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.340 [2024-07-26 14:16:09.338950] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.338956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.338963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.340 [2024-07-26 14:16:09.338971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.340 [2024-07-26 14:16:09.338980] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:01.340 [2024-07-26 14:16:09.338999] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:01.340 [2024-07-26 14:16:09.339012] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1189540) 00:22:01.340 [2024-07-26 14:16:09.339030] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.340 [2024-07-26 14:16:09.339057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e93c0, cid 0, qid 0 00:22:01.340 [2024-07-26 14:16:09.339068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9540, cid 1, qid 0 00:22:01.340 [2024-07-26 14:16:09.339076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e96c0, cid 2, qid 0 00:22:01.340 [2024-07-26 14:16:09.339084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.340 [2024-07-26 14:16:09.339091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e99c0, cid 4, qid 0 00:22:01.340 [2024-07-26 14:16:09.339234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.340 [2024-07-26 14:16:09.339248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.340 [2024-07-26 14:16:09.339255] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e99c0) on tqpair=0x1189540 00:22:01.340 [2024-07-26 14:16:09.339273] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:01.340 [2024-07-26 14:16:09.339282] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:01.340 [2024-07-26 14:16:09.339300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1189540) 00:22:01.340 [2024-07-26 14:16:09.339320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.340 [2024-07-26 14:16:09.339340] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e99c0, cid 4, qid 0 00:22:01.340 [2024-07-26 14:16:09.339425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.340 [2024-07-26 14:16:09.339437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.340 [2024-07-26 14:16:09.339444] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339450] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1189540): datao=0, datal=4096, cccid=4 00:22:01.340 [2024-07-26 14:16:09.339458] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11e99c0) on tqpair(0x1189540): expected_datao=0, payload_size=4096 00:22:01.340 [2024-07-26 14:16:09.339465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339481] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339489] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.340 [2024-07-26 14:16:09.339511] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.340 [2024-07-26 14:16:09.339517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e99c0) on tqpair=0x1189540 00:22:01.340 [2024-07-26 14:16:09.339554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:01.340 [2024-07-26 14:16:09.339595] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1189540) 00:22:01.340 [2024-07-26 14:16:09.339616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.340 [2024-07-26 14:16:09.339628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339641] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1189540) 00:22:01.340 [2024-07-26 
14:16:09.339653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.340 [2024-07-26 14:16:09.339681] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e99c0, cid 4, qid 0 00:22:01.340 [2024-07-26 14:16:09.339693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9b40, cid 5, qid 0 00:22:01.340 [2024-07-26 14:16:09.339810] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.340 [2024-07-26 14:16:09.339822] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.340 [2024-07-26 14:16:09.339829] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339835] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1189540): datao=0, datal=1024, cccid=4 00:22:01.340 [2024-07-26 14:16:09.339842] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11e99c0) on tqpair(0x1189540): expected_datao=0, payload_size=1024 00:22:01.340 [2024-07-26 14:16:09.339850] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339860] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339867] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.340 [2024-07-26 14:16:09.339885] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.340 [2024-07-26 14:16:09.339891] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.340 [2024-07-26 14:16:09.339898] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9b40) on tqpair=0x1189540 00:22:01.601 [2024-07-26 14:16:09.380613] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.601 [2024-07-26 14:16:09.380633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.601 [2024-07-26 14:16:09.380642] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.601 [2024-07-26 14:16:09.380649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e99c0) on tqpair=0x1189540 00:22:01.601 [2024-07-26 14:16:09.380668] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.601 [2024-07-26 14:16:09.380678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1189540) 00:22:01.601 [2024-07-26 14:16:09.380689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.601 [2024-07-26 14:16:09.380720] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e99c0, cid 4, qid 0 00:22:01.601 [2024-07-26 14:16:09.380814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.601 [2024-07-26 14:16:09.380827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.601 [2024-07-26 14:16:09.380834] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.601 [2024-07-26 14:16:09.380840] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1189540): datao=0, datal=3072, cccid=4 00:22:01.601 [2024-07-26 14:16:09.380848] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11e99c0) on tqpair(0x1189540): expected_datao=0, payload_size=3072 00:22:01.601 
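[Editor's note] A decoding aid for the GET LOG PAGE (02) admin commands in this stretch of the trace: the low byte of cdw10 is the log page ID (0x70, the discovery log page) and bits 31:16 are NUMDL, a 0's-based dword count, so the three commands request 1024, 3072 and 8 bytes respectively. That is consistent with the usual discovery read sequence: fetch the 1024-byte log header to learn the record count, re-read the full 3072-byte page (header plus two 1024-byte records), then re-fetch the 8-byte generation counter to confirm the log did not change mid-read. The arithmetic can be checked in shell:

    # LID (bits 7:0) and NUMDL (bits 31:16, 0's-based dword count) pulled from cdw10:
    for cdw10 in 0x00ff0070 0x02ff0070 0x00010070; do
        printf 'cdw10=%s  lid=0x%02x  transfer=%d bytes\n' \
            "$cdw10" "$(( cdw10 & 0xff ))" "$(( ((cdw10 >> 16) + 1) * 4 ))"
    done
    # -> 1024, 3072 and 8 bytes, matching the datal= values in the c2h_data lines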
00:22:01.601 [2024-07-26 14:16:09.380855] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.601 [2024-07-26 14:16:09.380875] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:01.601 [2024-07-26 14:16:09.380884] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:01.601 [2024-07-26 14:16:09.421601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.601 [2024-07-26 14:16:09.421620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.601 [2024-07-26 14:16:09.421627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.601 [2024-07-26 14:16:09.421634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e99c0) on tqpair=0x1189540
00:22:01.601 [2024-07-26 14:16:09.421651] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.601 [2024-07-26 14:16:09.421659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1189540)
00:22:01.601 [2024-07-26 14:16:09.421675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.601 [2024-07-26 14:16:09.421705] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e99c0, cid 4, qid 0
00:22:01.601 [2024-07-26 14:16:09.421799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:01.601 [2024-07-26 14:16:09.421813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:01.601 [2024-07-26 14:16:09.421820] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:01.601 [2024-07-26 14:16:09.421827] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1189540): datao=0, datal=8, cccid=4
00:22:01.601 [2024-07-26 14:16:09.421834] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11e99c0) on tqpair(0x1189540): expected_datao=0, payload_size=8
00:22:01.601 [2024-07-26 14:16:09.421842] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.601 [2024-07-26 14:16:09.421852] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:01.601 [2024-07-26 14:16:09.421859] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:01.601 [2024-07-26 14:16:09.465550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.601 [2024-07-26 14:16:09.465568] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.601 [2024-07-26 14:16:09.465576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.601 [2024-07-26 14:16:09.465583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e99c0) on tqpair=0x1189540
00:22:01.601 =====================================================
00:22:01.601 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:01.601 =====================================================
00:22:01.601 Controller Capabilities/Features
00:22:01.601 ================================
00:22:01.601 Vendor ID: 0000
00:22:01.601 Subsystem Vendor ID: 0000
00:22:01.601 Serial Number: ....................
00:22:01.601 Model Number: ........................................
00:22:01.601 Firmware Version: 24.09
00:22:01.601 Recommended Arb Burst: 0
00:22:01.601 IEEE OUI Identifier: 00 00 00
00:22:01.601 Multi-path I/O
00:22:01.601 May have multiple subsystem ports: No
00:22:01.601 May have multiple controllers: No
00:22:01.601 Associated with SR-IOV VF: No
00:22:01.601 Max Data Transfer Size: 131072
00:22:01.601 Max Number of Namespaces: 0
00:22:01.601 Max Number of I/O Queues: 1024
00:22:01.601 NVMe Specification Version (VS): 1.3
00:22:01.601 NVMe Specification Version (Identify): 1.3
00:22:01.601 Maximum Queue Entries: 128
00:22:01.601 Contiguous Queues Required: Yes
00:22:01.601 Arbitration Mechanisms Supported
00:22:01.601 Weighted Round Robin: Not Supported
00:22:01.601 Vendor Specific: Not Supported
00:22:01.601 Reset Timeout: 15000 ms
00:22:01.601 Doorbell Stride: 4 bytes
00:22:01.601 NVM Subsystem Reset: Not Supported
00:22:01.601 Command Sets Supported
00:22:01.601 NVM Command Set: Supported
00:22:01.601 Boot Partition: Not Supported
00:22:01.601 Memory Page Size Minimum: 4096 bytes
00:22:01.601 Memory Page Size Maximum: 4096 bytes
00:22:01.601 Persistent Memory Region: Not Supported
00:22:01.601 Optional Asynchronous Events Supported
00:22:01.601 Namespace Attribute Notices: Not Supported
00:22:01.601 Firmware Activation Notices: Not Supported
00:22:01.601 ANA Change Notices: Not Supported
00:22:01.601 PLE Aggregate Log Change Notices: Not Supported
00:22:01.601 LBA Status Info Alert Notices: Not Supported
00:22:01.601 EGE Aggregate Log Change Notices: Not Supported
00:22:01.601 Normal NVM Subsystem Shutdown event: Not Supported
00:22:01.601 Zone Descriptor Change Notices: Not Supported
00:22:01.601 Discovery Log Change Notices: Supported
00:22:01.601 Controller Attributes
00:22:01.601 128-bit Host Identifier: Not Supported
00:22:01.601 Non-Operational Permissive Mode: Not Supported
00:22:01.601 NVM Sets: Not Supported
00:22:01.601 Read Recovery Levels: Not Supported
00:22:01.601 Endurance Groups: Not Supported
00:22:01.601 Predictable Latency Mode: Not Supported
00:22:01.601 Traffic Based Keep ALive: Not Supported
00:22:01.601 Namespace Granularity: Not Supported
00:22:01.601 SQ Associations: Not Supported
00:22:01.601 UUID List: Not Supported
00:22:01.601 Multi-Domain Subsystem: Not Supported
00:22:01.601 Fixed Capacity Management: Not Supported
00:22:01.601 Variable Capacity Management: Not Supported
00:22:01.601 Delete Endurance Group: Not Supported
00:22:01.601 Delete NVM Set: Not Supported
00:22:01.601 Extended LBA Formats Supported: Not Supported
00:22:01.601 Flexible Data Placement Supported: Not Supported
00:22:01.601
00:22:01.601 Controller Memory Buffer Support
00:22:01.601 ================================
00:22:01.601 Supported: No
00:22:01.601
00:22:01.601 Persistent Memory Region Support
00:22:01.601 ================================
00:22:01.601 Supported: No
00:22:01.601
00:22:01.601 Admin Command Set Attributes
00:22:01.601 ============================
00:22:01.601 Security Send/Receive: Not Supported
00:22:01.601 Format NVM: Not Supported
00:22:01.601 Firmware Activate/Download: Not Supported
00:22:01.601 Namespace Management: Not Supported
00:22:01.601 Device Self-Test: Not Supported
00:22:01.601 Directives: Not Supported
00:22:01.601 NVMe-MI: Not Supported
00:22:01.601 Virtualization Management: Not Supported
00:22:01.601 Doorbell Buffer Config: Not Supported
00:22:01.601 Get LBA Status Capability: Not Supported
00:22:01.601 Command & Feature Lockdown Capability: Not Supported
00:22:01.601 Abort Command Limit: 1
00:22:01.601 Async Event Request Limit: 4
00:22:01.601 Number of Firmware Slots: N/A
00:22:01.601 Firmware Slot 1 Read-Only: N/A
00:22:01.601 Firmware Activation Without Reset: N/A
00:22:01.601 Multiple Update Detection Support: N/A
00:22:01.601 Firmware Update Granularity: No Information Provided
00:22:01.601 Per-Namespace SMART Log: No
00:22:01.601 Asymmetric Namespace Access Log Page: Not Supported
00:22:01.601 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:01.601 Command Effects Log Page: Not Supported
00:22:01.601 Get Log Page Extended Data: Supported
00:22:01.601 Telemetry Log Pages: Not Supported
00:22:01.601 Persistent Event Log Pages: Not Supported
00:22:01.601 Supported Log Pages Log Page: May Support
00:22:01.601 Commands Supported & Effects Log Page: Not Supported
00:22:01.601 Feature Identifiers & Effects Log Page:May Support
00:22:01.601 NVMe-MI Commands & Effects Log Page: May Support
00:22:01.601 Data Area 4 for Telemetry Log: Not Supported
00:22:01.601 Error Log Page Entries Supported: 128
00:22:01.601 Keep Alive: Not Supported
00:22:01.601
00:22:01.602 NVM Command Set Attributes
00:22:01.602 ==========================
00:22:01.602 Submission Queue Entry Size
00:22:01.602 Max: 1
00:22:01.602 Min: 1
00:22:01.602 Completion Queue Entry Size
00:22:01.602 Max: 1
00:22:01.602 Min: 1
00:22:01.602 Number of Namespaces: 0
00:22:01.602 Compare Command: Not Supported
00:22:01.602 Write Uncorrectable Command: Not Supported
00:22:01.602 Dataset Management Command: Not Supported
00:22:01.602 Write Zeroes Command: Not Supported
00:22:01.602 Set Features Save Field: Not Supported
00:22:01.602 Reservations: Not Supported
00:22:01.602 Timestamp: Not Supported
00:22:01.602 Copy: Not Supported
00:22:01.602 Volatile Write Cache: Not Present
00:22:01.602 Atomic Write Unit (Normal): 1
00:22:01.602 Atomic Write Unit (PFail): 1
00:22:01.602 Atomic Compare & Write Unit: 1
00:22:01.602 Fused Compare & Write: Supported
00:22:01.602 Scatter-Gather List
00:22:01.602 SGL Command Set: Supported
00:22:01.602 SGL Keyed: Supported
00:22:01.602 SGL Bit Bucket Descriptor: Not Supported
00:22:01.602 SGL Metadata Pointer: Not Supported
00:22:01.602 Oversized SGL: Not Supported
00:22:01.602 SGL Metadata Address: Not Supported
00:22:01.602 SGL Offset: Supported
00:22:01.602 Transport SGL Data Block: Not Supported
00:22:01.602 Replay Protected Memory Block: Not Supported
00:22:01.602
00:22:01.602 Firmware Slot Information
00:22:01.602 =========================
00:22:01.602 Active slot: 0
00:22:01.602
00:22:01.602
00:22:01.602 Error Log
00:22:01.602 =========
00:22:01.602
00:22:01.602 Active Namespaces
00:22:01.602 =================
00:22:01.602 Discovery Log Page
00:22:01.602 ==================
00:22:01.602 Generation Counter: 2
00:22:01.602 Number of Records: 2
00:22:01.602 Record Format: 0
00:22:01.602
00:22:01.602 Discovery Log Entry 0
00:22:01.602 ----------------------
00:22:01.602 Transport Type: 3 (TCP)
00:22:01.602 Address Family: 1 (IPv4)
00:22:01.602 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:01.602 Entry Flags:
00:22:01.602 Duplicate Returned Information: 1
00:22:01.602 Explicit Persistent Connection Support for Discovery: 1
00:22:01.602 Transport Requirements:
00:22:01.602 Secure Channel: Not Required
00:22:01.602 Port ID: 0 (0x0000)
00:22:01.602 Controller ID: 65535 (0xffff)
00:22:01.602 Admin Max SQ Size: 128
00:22:01.602 Transport Service Identifier: 4420
00:22:01.602 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:01.602 Transport Address: 10.0.0.2
00:22:01.602 Discovery Log Entry 1
00:22:01.602 ----------------------
00:22:01.602 Transport Type: 3 (TCP)
00:22:01.602 Address Family: 1 (IPv4)
00:22:01.602 Subsystem Type: 2 (NVM Subsystem)
00:22:01.602 Entry Flags:
00:22:01.602 Duplicate Returned Information: 0
00:22:01.602 Explicit Persistent Connection Support for Discovery: 0
00:22:01.602 Transport Requirements:
00:22:01.602 Secure Channel: Not Required
00:22:01.602 Port ID: 0 (0x0000)
00:22:01.602 Controller ID: 65535 (0xffff)
00:22:01.602 Admin Max SQ Size: 128
00:22:01.602 Transport Service Identifier: 4420
00:22:01.602 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:01.602 Transport Address: 10.0.0.2 [2024-07-26 14:16:09.465696] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:22:01.602 [2024-07-26 14:16:09.465719] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e93c0) on tqpair=0x1189540
00:22:01.602 [2024-07-26 14:16:09.465732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.602 [2024-07-26 14:16:09.465741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9540) on tqpair=0x1189540
00:22:01.602 [2024-07-26 14:16:09.465749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.602 [2024-07-26 14:16:09.465757] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e96c0) on tqpair=0x1189540
00:22:01.602 [2024-07-26 14:16:09.465764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.602 [2024-07-26 14:16:09.465772] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540
00:22:01.602 [2024-07-26 14:16:09.465780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.602 [2024-07-26 14:16:09.465799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.602 [2024-07-26 14:16:09.465808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.602 [2024-07-26 14:16:09.465830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540)
00:22:01.602 [2024-07-26 14:16:09.465841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.602 [2024-07-26 14:16:09.465867] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0
00:22:01.602 [2024-07-26 14:16:09.465955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:01.602 [2024-07-26 14:16:09.465969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:01.602 [2024-07-26 14:16:09.465976] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:01.602 [2024-07-26 14:16:09.465983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540
00:22:01.602 [2024-07-26 14:16:09.465996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:01.602 [2024-07-26 14:16:09.466003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:01.602 [2024-07-26 14:16:09.466009] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540)
00:22:01.602 [2024-07-26
14:16:09.466024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.602 [2024-07-26 14:16:09.466052] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.602 [2024-07-26 14:16:09.466141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.602 [2024-07-26 14:16:09.466152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.602 [2024-07-26 14:16:09.466159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.602 [2024-07-26 14:16:09.466166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.602 [2024-07-26 14:16:09.466175] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:01.602 [2024-07-26 14:16:09.466184] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:01.602 [2024-07-26 14:16:09.466199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.602 [2024-07-26 14:16:09.466208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.602 [2024-07-26 14:16:09.466214] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.602 [2024-07-26 14:16:09.466225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.602 [2024-07-26 14:16:09.466245] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.602 [2024-07-26 14:16:09.466323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.602 [2024-07-26 14:16:09.466337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.602 [2024-07-26 14:16:09.466343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.602 [2024-07-26 14:16:09.466350] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.602 [2024-07-26 14:16:09.466367] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.602 [2024-07-26 14:16:09.466376] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.602 [2024-07-26 14:16:09.466382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.602 [2024-07-26 14:16:09.466393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.602 [2024-07-26 14:16:09.466413] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.602 [2024-07-26 14:16:09.466487] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.602 [2024-07-26 14:16:09.466499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.602 [2024-07-26 14:16:09.466506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.602 [2024-07-26 14:16:09.466512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.602 [2024-07-26 14:16:09.466534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.602 [2024-07-26 14:16:09.466544] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.602 [2024-07-26 14:16:09.466550] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.602 [2024-07-26 14:16:09.466561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.602 [2024-07-26 14:16:09.466582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.602 [2024-07-26 14:16:09.466676] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.602 [2024-07-26 14:16:09.466688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.602 [2024-07-26 14:16:09.466694] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.466701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.603 [2024-07-26 14:16:09.466716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.466729] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.466736] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.603 [2024-07-26 14:16:09.466746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.603 [2024-07-26 14:16:09.466767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.603 [2024-07-26 14:16:09.466837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.603 [2024-07-26 14:16:09.466851] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.603 [2024-07-26 14:16:09.466858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.466865] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.603 [2024-07-26 14:16:09.466880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.466889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.466895] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.603 [2024-07-26 14:16:09.466905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.603 [2024-07-26 14:16:09.466926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.603 [2024-07-26 14:16:09.466998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.603 [2024-07-26 14:16:09.467012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.603 [2024-07-26 14:16:09.467019] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.603 [2024-07-26 14:16:09.467041] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467050] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.603 [2024-07-26 14:16:09.467066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.603 [2024-07-26 14:16:09.467086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.603 [2024-07-26 14:16:09.467159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.603 [2024-07-26 14:16:09.467172] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.603 [2024-07-26 14:16:09.467179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.603 [2024-07-26 14:16:09.467201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467210] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467216] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.603 [2024-07-26 14:16:09.467226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.603 [2024-07-26 14:16:09.467247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.603 [2024-07-26 14:16:09.467315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.603 [2024-07-26 14:16:09.467326] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.603 [2024-07-26 14:16:09.467333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.603 [2024-07-26 14:16:09.467355] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467374] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.603 [2024-07-26 14:16:09.467384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.603 [2024-07-26 14:16:09.467405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.603 [2024-07-26 14:16:09.467476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.603 [2024-07-26 14:16:09.467488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.603 [2024-07-26 14:16:09.467494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.603 [2024-07-26 14:16:09.467516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.603 [2024-07-26 14:16:09.467551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.603 [2024-07-26 14:16:09.467572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.603 
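[Editor's note] The burst of near-identical FABRIC PROPERTY GET exchanges from the "Prepare to destruct SSD" line down to the end of this excerpt is the controller shutdown handshake. The trace reports RTD3E = 0 us, so the host falls back to the default 10000 ms shutdown timeout, issues a single FABRIC PROPERTY SET writing CC.SHN = 01b (normal shutdown), and then polls CSTS until SHST reads 10b (shutdown complete). In outline (a sketch only; write_property and read_property are hypothetical stand-ins for the fabrics property commands, not SPDK functions):

    cc=$(read_property CC)                        # hypothetical helper wrapping FABRIC PROPERTY GET
    write_property CC $(( cc | (1 << 14) ))       # CC.SHN = 01b: request normal shutdown
    deadline=$(( SECONDS + 10 ))                  # RTD3E = 0 us -> default 10000 ms timeout
    while (( SECONDS < deadline )); do
        csts=$(read_property CSTS)                # one FABRIC PROPERTY GET per iteration
        (( ((csts >> 2) & 0x3) == 2 )) && break   # CSTS.SHST = 10b: shutdown complete
    done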
[2024-07-26 14:16:09.467649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.603 [2024-07-26 14:16:09.467662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.603 [2024-07-26 14:16:09.467669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.603 [2024-07-26 14:16:09.467691] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467700] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.603 [2024-07-26 14:16:09.467717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.603 [2024-07-26 14:16:09.467737] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.603 [2024-07-26 14:16:09.467809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.603 [2024-07-26 14:16:09.467821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.603 [2024-07-26 14:16:09.467828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.603 [2024-07-26 14:16:09.467849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467858] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467865] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.603 [2024-07-26 14:16:09.467875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.603 [2024-07-26 14:16:09.467895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.603 [2024-07-26 14:16:09.467966] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.603 [2024-07-26 14:16:09.467977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.603 [2024-07-26 14:16:09.467984] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.467991] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.603 [2024-07-26 14:16:09.468006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.468015] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.468021] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.603 [2024-07-26 14:16:09.468035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.603 [2024-07-26 14:16:09.468056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.603 [2024-07-26 14:16:09.468126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.603 [2024-07-26 14:16:09.468140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:01.603 [2024-07-26 14:16:09.468147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.468153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.603 [2024-07-26 14:16:09.468169] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.468178] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.468184] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.603 [2024-07-26 14:16:09.468194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.603 [2024-07-26 14:16:09.468214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.603 [2024-07-26 14:16:09.468285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.603 [2024-07-26 14:16:09.468296] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.603 [2024-07-26 14:16:09.468303] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.468310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.603 [2024-07-26 14:16:09.468325] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.468334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.468340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.603 [2024-07-26 14:16:09.468350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.603 [2024-07-26 14:16:09.468370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.603 [2024-07-26 14:16:09.468441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.603 [2024-07-26 14:16:09.468453] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.603 [2024-07-26 14:16:09.468460] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.468467] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.603 [2024-07-26 14:16:09.468482] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.468491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.603 [2024-07-26 14:16:09.468497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.603 [2024-07-26 14:16:09.468507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.603 [2024-07-26 14:16:09.468541] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.604 [2024-07-26 14:16:09.468605] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.604 [2024-07-26 14:16:09.468617] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.604 [2024-07-26 14:16:09.468624] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.468630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.604 [2024-07-26 14:16:09.468646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.468655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.468662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.604 [2024-07-26 14:16:09.468672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.604 [2024-07-26 14:16:09.468697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.604 [2024-07-26 14:16:09.468762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.604 [2024-07-26 14:16:09.468774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.604 [2024-07-26 14:16:09.468780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.468787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.604 [2024-07-26 14:16:09.468802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.468811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.468818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.604 [2024-07-26 14:16:09.468828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.604 [2024-07-26 14:16:09.468848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.604 [2024-07-26 14:16:09.468921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.604 [2024-07-26 14:16:09.468935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.604 [2024-07-26 14:16:09.468941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.468948] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.604 [2024-07-26 14:16:09.468963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.468972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.468979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.604 [2024-07-26 14:16:09.468989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.604 [2024-07-26 14:16:09.469009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.604 [2024-07-26 14:16:09.469082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.604 [2024-07-26 14:16:09.469095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.604 [2024-07-26 14:16:09.469102] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.469108] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.604 [2024-07-26 14:16:09.469124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.469132] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.469139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.604 [2024-07-26 14:16:09.469149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.604 [2024-07-26 14:16:09.469169] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.604 [2024-07-26 14:16:09.469261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.604 [2024-07-26 14:16:09.469275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.604 [2024-07-26 14:16:09.469281] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.469288] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.604 [2024-07-26 14:16:09.469304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.469312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.469319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.604 [2024-07-26 14:16:09.469329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.604 [2024-07-26 14:16:09.469353] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.604 [2024-07-26 14:16:09.469424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.604 [2024-07-26 14:16:09.469436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.604 [2024-07-26 14:16:09.469442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.469449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.604 [2024-07-26 14:16:09.469464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.469473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.469479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.604 [2024-07-26 14:16:09.469489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.604 [2024-07-26 14:16:09.469509] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.604 [2024-07-26 14:16:09.473559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.604 [2024-07-26 14:16:09.473575] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.604 [2024-07-26 14:16:09.473582] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.473589] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.604 [2024-07-26 14:16:09.473606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.473615] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.473622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1189540) 00:22:01.604 
[2024-07-26 14:16:09.473632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.604 [2024-07-26 14:16:09.473654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e9840, cid 3, qid 0 00:22:01.604 [2024-07-26 14:16:09.473727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.604 [2024-07-26 14:16:09.473739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.604 [2024-07-26 14:16:09.473746] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.473752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11e9840) on tqpair=0x1189540 00:22:01.604 [2024-07-26 14:16:09.473765] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:22:01.604 00:22:01.604 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:01.604 [2024-07-26 14:16:09.516719] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:22:01.604 [2024-07-26 14:16:09.516764] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277784 ] 00:22:01.604 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.604 [2024-07-26 14:16:09.551350] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:01.604 [2024-07-26 14:16:09.551402] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:01.604 [2024-07-26 14:16:09.551411] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:01.604 [2024-07-26 14:16:09.551425] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:01.604 [2024-07-26 14:16:09.551440] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:01.604 [2024-07-26 14:16:09.551659] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:01.604 [2024-07-26 14:16:09.551696] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ffc540 0 00:22:01.604 [2024-07-26 14:16:09.566551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:01.604 [2024-07-26 14:16:09.566574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:01.604 [2024-07-26 14:16:09.566582] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:01.604 [2024-07-26 14:16:09.566588] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:01.604 [2024-07-26 14:16:09.566627] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.566639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.566645] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ffc540) 00:22:01.604 [2024-07-26 14:16:09.566659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:01.604 [2024-07-26 14:16:09.566684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c3c0, cid 0, qid 0 00:22:01.604 [2024-07-26 14:16:09.574548] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.604 [2024-07-26 14:16:09.574566] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.604 [2024-07-26 14:16:09.574573] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.574580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c3c0) on tqpair=0x1ffc540 00:22:01.604 [2024-07-26 14:16:09.574594] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:01.604 [2024-07-26 14:16:09.574604] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:01.604 [2024-07-26 14:16:09.574613] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:01.604 [2024-07-26 14:16:09.574632] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.574641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.604 [2024-07-26 14:16:09.574647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ffc540) 00:22:01.604 [2024-07-26 14:16:09.574659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.605 [2024-07-26 14:16:09.574681] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c3c0, cid 0, qid 0 00:22:01.605 [2024-07-26 14:16:09.574798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.605 [2024-07-26 14:16:09.574810] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.605 [2024-07-26 14:16:09.574817] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.574823] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c3c0) on tqpair=0x1ffc540 00:22:01.605 [2024-07-26 14:16:09.574835] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:01.605 [2024-07-26 14:16:09.574849] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:01.605 [2024-07-26 14:16:09.574861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.574868] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.574874] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ffc540) 00:22:01.605 [2024-07-26 14:16:09.574885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.605 [2024-07-26 14:16:09.574906] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c3c0, cid 0, qid 0 00:22:01.605 [2024-07-26 14:16:09.574988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.605 [2024-07-26 14:16:09.575003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.605 [2024-07-26 14:16:09.575011] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.605 [2024-07-26 
14:16:09.575018] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c3c0) on tqpair=0x1ffc540 00:22:01.605 [2024-07-26 14:16:09.575026] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:01.605 [2024-07-26 14:16:09.575040] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:01.605 [2024-07-26 14:16:09.575052] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.575059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.575066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ffc540) 00:22:01.605 [2024-07-26 14:16:09.575076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.605 [2024-07-26 14:16:09.575097] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c3c0, cid 0, qid 0 00:22:01.605 [2024-07-26 14:16:09.575177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.605 [2024-07-26 14:16:09.575191] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.605 [2024-07-26 14:16:09.575198] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.575204] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c3c0) on tqpair=0x1ffc540 00:22:01.605 [2024-07-26 14:16:09.575212] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:01.605 [2024-07-26 14:16:09.575228] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.575237] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.575244] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ffc540) 00:22:01.605 [2024-07-26 14:16:09.575254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.605 [2024-07-26 14:16:09.575275] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c3c0, cid 0, qid 0 00:22:01.605 [2024-07-26 14:16:09.575353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.605 [2024-07-26 14:16:09.575367] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.605 [2024-07-26 14:16:09.575374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.575380] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c3c0) on tqpair=0x1ffc540 00:22:01.605 [2024-07-26 14:16:09.575388] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:01.605 [2024-07-26 14:16:09.575396] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:01.605 [2024-07-26 14:16:09.575409] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:01.605 [2024-07-26 14:16:09.575518] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 
00:22:01.605 [2024-07-26 14:16:09.575526] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:01.605 [2024-07-26 14:16:09.575546] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.575553] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.575560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ffc540) 00:22:01.605 [2024-07-26 14:16:09.575570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.605 [2024-07-26 14:16:09.575596] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c3c0, cid 0, qid 0 00:22:01.605 [2024-07-26 14:16:09.575711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.605 [2024-07-26 14:16:09.575724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.605 [2024-07-26 14:16:09.575730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.575737] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c3c0) on tqpair=0x1ffc540 00:22:01.605 [2024-07-26 14:16:09.575745] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:01.605 [2024-07-26 14:16:09.575760] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.575769] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.575775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ffc540) 00:22:01.605 [2024-07-26 14:16:09.575786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.605 [2024-07-26 14:16:09.575806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c3c0, cid 0, qid 0 00:22:01.605 [2024-07-26 14:16:09.575889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.605 [2024-07-26 14:16:09.575902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.605 [2024-07-26 14:16:09.575909] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.575916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c3c0) on tqpair=0x1ffc540 00:22:01.605 [2024-07-26 14:16:09.575923] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:01.605 [2024-07-26 14:16:09.575932] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:01.605 [2024-07-26 14:16:09.575945] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:01.605 [2024-07-26 14:16:09.575959] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:01.605 [2024-07-26 14:16:09.575972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.575980] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=0 on tqpair(0x1ffc540) 00:22:01.605 [2024-07-26 14:16:09.575990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.605 [2024-07-26 14:16:09.576011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c3c0, cid 0, qid 0 00:22:01.605 [2024-07-26 14:16:09.576133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.605 [2024-07-26 14:16:09.576146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.605 [2024-07-26 14:16:09.576153] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.576159] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ffc540): datao=0, datal=4096, cccid=0 00:22:01.605 [2024-07-26 14:16:09.576167] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x205c3c0) on tqpair(0x1ffc540): expected_datao=0, payload_size=4096 00:22:01.605 [2024-07-26 14:16:09.576174] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.576191] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.576200] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.605 [2024-07-26 14:16:09.576211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.605 [2024-07-26 14:16:09.576221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.605 [2024-07-26 14:16:09.576228] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.576239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c3c0) on tqpair=0x1ffc540 00:22:01.606 [2024-07-26 14:16:09.576249] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:01.606 [2024-07-26 14:16:09.576258] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:01.606 [2024-07-26 14:16:09.576265] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:01.606 [2024-07-26 14:16:09.576272] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:01.606 [2024-07-26 14:16:09.576280] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:01.606 [2024-07-26 14:16:09.576288] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:01.606 [2024-07-26 14:16:09.576302] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:01.606 [2024-07-26 14:16:09.576318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.576326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.576332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ffc540) 00:22:01.606 [2024-07-26 14:16:09.576343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:01.606 [2024-07-26 14:16:09.576364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c3c0, cid 
0, qid 0 00:22:01.606 [2024-07-26 14:16:09.576452] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.606 [2024-07-26 14:16:09.576464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.606 [2024-07-26 14:16:09.576470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.576477] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c3c0) on tqpair=0x1ffc540 00:22:01.606 [2024-07-26 14:16:09.576487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.576494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.576501] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ffc540) 00:22:01.606 [2024-07-26 14:16:09.576510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.606 [2024-07-26 14:16:09.576520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.576535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.576543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ffc540) 00:22:01.606 [2024-07-26 14:16:09.576552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.606 [2024-07-26 14:16:09.576562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.576569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.576575] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ffc540) 00:22:01.606 [2024-07-26 14:16:09.576584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.606 [2024-07-26 14:16:09.576594] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.576601] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.576607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ffc540) 00:22:01.606 [2024-07-26 14:16:09.576615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.606 [2024-07-26 14:16:09.576628] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:01.606 [2024-07-26 14:16:09.576647] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:01.606 [2024-07-26 14:16:09.576659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.576666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ffc540) 00:22:01.606 [2024-07-26 14:16:09.576677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.606 [2024-07-26 14:16:09.576699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c3c0, cid 0, qid 0 00:22:01.606 [2024-07-26 14:16:09.576725] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c540, cid 1, qid 0 00:22:01.606 [2024-07-26 14:16:09.576733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c6c0, cid 2, qid 0 00:22:01.606 [2024-07-26 14:16:09.576740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c840, cid 3, qid 0 00:22:01.606 [2024-07-26 14:16:09.576748] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c9c0, cid 4, qid 0 00:22:01.606 [2024-07-26 14:16:09.576949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.606 [2024-07-26 14:16:09.576964] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.606 [2024-07-26 14:16:09.576971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.576978] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c9c0) on tqpair=0x1ffc540 00:22:01.606 [2024-07-26 14:16:09.576985] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:01.606 [2024-07-26 14:16:09.576994] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:01.606 [2024-07-26 14:16:09.577013] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:01.606 [2024-07-26 14:16:09.577026] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:01.606 [2024-07-26 14:16:09.577037] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.577045] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.577051] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ffc540) 00:22:01.606 [2024-07-26 14:16:09.577062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:01.606 [2024-07-26 14:16:09.577097] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c9c0, cid 4, qid 0 00:22:01.606 [2024-07-26 14:16:09.580538] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.606 [2024-07-26 14:16:09.580555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.606 [2024-07-26 14:16:09.580562] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.580569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c9c0) on tqpair=0x1ffc540 00:22:01.606 [2024-07-26 14:16:09.580637] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:01.606 [2024-07-26 14:16:09.580657] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:01.606 [2024-07-26 14:16:09.580673] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.580681] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ffc540) 00:22:01.606 [2024-07-26 14:16:09.580691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 
cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.606 [2024-07-26 14:16:09.580716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c9c0, cid 4, qid 0 00:22:01.606 [2024-07-26 14:16:09.580857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.606 [2024-07-26 14:16:09.580872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.606 [2024-07-26 14:16:09.580879] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.580886] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ffc540): datao=0, datal=4096, cccid=4 00:22:01.606 [2024-07-26 14:16:09.580894] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x205c9c0) on tqpair(0x1ffc540): expected_datao=0, payload_size=4096 00:22:01.606 [2024-07-26 14:16:09.580901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.580919] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.606 [2024-07-26 14:16:09.580928] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.866 [2024-07-26 14:16:09.626539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.866 [2024-07-26 14:16:09.626560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.866 [2024-07-26 14:16:09.626569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.866 [2024-07-26 14:16:09.626577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c9c0) on tqpair=0x1ffc540 00:22:01.866 [2024-07-26 14:16:09.626596] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:01.866 [2024-07-26 14:16:09.626619] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:01.866 [2024-07-26 14:16:09.626639] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:01.866 [2024-07-26 14:16:09.626655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.866 [2024-07-26 14:16:09.626669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ffc540) 00:22:01.866 [2024-07-26 14:16:09.626688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.866 [2024-07-26 14:16:09.626725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c9c0, cid 4, qid 0 00:22:01.866 [2024-07-26 14:16:09.626864] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.866 [2024-07-26 14:16:09.626887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.866 [2024-07-26 14:16:09.626896] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.866 [2024-07-26 14:16:09.626903] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ffc540): datao=0, datal=4096, cccid=4 00:22:01.866 [2024-07-26 14:16:09.626911] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x205c9c0) on tqpair(0x1ffc540): expected_datao=0, payload_size=4096 00:22:01.866 [2024-07-26 14:16:09.626919] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.866 [2024-07-26 14:16:09.626938] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:22:01.866 [2024-07-26 14:16:09.626947] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.866 [2024-07-26 14:16:09.667633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.866 [2024-07-26 14:16:09.667652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.866 [2024-07-26 14:16:09.667660] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.866 [2024-07-26 14:16:09.667667] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c9c0) on tqpair=0x1ffc540 00:22:01.866 [2024-07-26 14:16:09.667695] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:01.866 [2024-07-26 14:16:09.667716] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:01.866 [2024-07-26 14:16:09.667735] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.866 [2024-07-26 14:16:09.667744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ffc540) 00:22:01.866 [2024-07-26 14:16:09.667756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.866 [2024-07-26 14:16:09.667780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c9c0, cid 4, qid 0 00:22:01.866 [2024-07-26 14:16:09.667871] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.866 [2024-07-26 14:16:09.667886] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.866 [2024-07-26 14:16:09.667892] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.866 [2024-07-26 14:16:09.667899] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ffc540): datao=0, datal=4096, cccid=4 00:22:01.866 [2024-07-26 14:16:09.667906] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x205c9c0) on tqpair(0x1ffc540): expected_datao=0, payload_size=4096 00:22:01.866 [2024-07-26 14:16:09.667914] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.866 [2024-07-26 14:16:09.667931] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.866 [2024-07-26 14:16:09.667940] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.866 [2024-07-26 14:16:09.667972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.866 [2024-07-26 14:16:09.667985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.866 [2024-07-26 14:16:09.667992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.866 [2024-07-26 14:16:09.667998] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c9c0) on tqpair=0x1ffc540 00:22:01.866 [2024-07-26 14:16:09.668012] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:01.866 [2024-07-26 14:16:09.668026] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:01.866 [2024-07-26 14:16:09.668042] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:01.866 [2024-07-26 
14:16:09.668056] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:01.866 [2024-07-26 14:16:09.668066] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:01.866 [2024-07-26 14:16:09.668075] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:01.866 [2024-07-26 14:16:09.668084] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:01.866 [2024-07-26 14:16:09.668092] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:01.867 [2024-07-26 14:16:09.668101] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:01.867 [2024-07-26 14:16:09.668120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.668129] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ffc540) 00:22:01.867 [2024-07-26 14:16:09.668139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.867 [2024-07-26 14:16:09.668151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.668158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.668164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ffc540) 00:22:01.867 [2024-07-26 14:16:09.668173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.867 [2024-07-26 14:16:09.668201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c9c0, cid 4, qid 0 00:22:01.867 [2024-07-26 14:16:09.668214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205cb40, cid 5, qid 0 00:22:01.867 [2024-07-26 14:16:09.668311] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.867 [2024-07-26 14:16:09.668325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.867 [2024-07-26 14:16:09.668332] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.668339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c9c0) on tqpair=0x1ffc540 00:22:01.867 [2024-07-26 14:16:09.668349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.867 [2024-07-26 14:16:09.668359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.867 [2024-07-26 14:16:09.668365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.668372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205cb40) on tqpair=0x1ffc540 00:22:01.867 [2024-07-26 14:16:09.668387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.668396] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ffc540) 00:22:01.867 [2024-07-26 14:16:09.668407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 
cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.867 [2024-07-26 14:16:09.668428] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205cb40, cid 5, qid 0 00:22:01.867 [2024-07-26 14:16:09.668509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.867 [2024-07-26 14:16:09.668523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.867 [2024-07-26 14:16:09.668540] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.668548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205cb40) on tqpair=0x1ffc540 00:22:01.867 [2024-07-26 14:16:09.668564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.668573] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ffc540) 00:22:01.867 [2024-07-26 14:16:09.668584] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.867 [2024-07-26 14:16:09.668605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205cb40, cid 5, qid 0 00:22:01.867 [2024-07-26 14:16:09.668692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.867 [2024-07-26 14:16:09.668704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.867 [2024-07-26 14:16:09.668711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.668717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205cb40) on tqpair=0x1ffc540 00:22:01.867 [2024-07-26 14:16:09.668733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.668741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ffc540) 00:22:01.867 [2024-07-26 14:16:09.668752] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.867 [2024-07-26 14:16:09.668771] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205cb40, cid 5, qid 0 00:22:01.867 [2024-07-26 14:16:09.668865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.867 [2024-07-26 14:16:09.668876] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.867 [2024-07-26 14:16:09.668883] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.668890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205cb40) on tqpair=0x1ffc540 00:22:01.867 [2024-07-26 14:16:09.668914] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.668925] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ffc540) 00:22:01.867 [2024-07-26 14:16:09.668938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.867 [2024-07-26 14:16:09.668952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.668959] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ffc540) 00:22:01.867 [2024-07-26 14:16:09.668969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.867 [2024-07-26 14:16:09.668981] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.668988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ffc540) 00:22:01.867 [2024-07-26 14:16:09.668998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.867 [2024-07-26 14:16:09.669010] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ffc540) 00:22:01.867 [2024-07-26 14:16:09.669027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.867 [2024-07-26 14:16:09.669064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205cb40, cid 5, qid 0 00:22:01.867 [2024-07-26 14:16:09.669074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c9c0, cid 4, qid 0 00:22:01.867 [2024-07-26 14:16:09.669082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205ccc0, cid 6, qid 0 00:22:01.867 [2024-07-26 14:16:09.669090] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205ce40, cid 7, qid 0 00:22:01.867 [2024-07-26 14:16:09.669274] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.867 [2024-07-26 14:16:09.669287] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.867 [2024-07-26 14:16:09.669294] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669300] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ffc540): datao=0, datal=8192, cccid=5 00:22:01.867 [2024-07-26 14:16:09.669308] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x205cb40) on tqpair(0x1ffc540): expected_datao=0, payload_size=8192 00:22:01.867 [2024-07-26 14:16:09.669315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669336] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669346] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669355] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.867 [2024-07-26 14:16:09.669364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.867 [2024-07-26 14:16:09.669370] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669377] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ffc540): datao=0, datal=512, cccid=4 00:22:01.867 [2024-07-26 14:16:09.669384] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x205c9c0) on tqpair(0x1ffc540): expected_datao=0, payload_size=512 00:22:01.867 [2024-07-26 14:16:09.669392] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669401] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669408] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669417] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:22:01.867 [2024-07-26 14:16:09.669426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.867 [2024-07-26 14:16:09.669433] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669439] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ffc540): datao=0, datal=512, cccid=6 00:22:01.867 [2024-07-26 14:16:09.669450] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x205ccc0) on tqpair(0x1ffc540): expected_datao=0, payload_size=512 00:22:01.867 [2024-07-26 14:16:09.669458] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669468] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669475] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.867 [2024-07-26 14:16:09.669493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.867 [2024-07-26 14:16:09.669499] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669505] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ffc540): datao=0, datal=4096, cccid=7 00:22:01.867 [2024-07-26 14:16:09.669513] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x205ce40) on tqpair(0x1ffc540): expected_datao=0, payload_size=4096 00:22:01.867 [2024-07-26 14:16:09.669520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669538] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669547] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.867 [2024-07-26 14:16:09.669568] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.867 [2024-07-26 14:16:09.669575] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.867 [2024-07-26 14:16:09.669582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205cb40) on tqpair=0x1ffc540 00:22:01.867 [2024-07-26 14:16:09.669615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.867 [2024-07-26 14:16:09.669627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.867 [2024-07-26 14:16:09.669633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.868 [2024-07-26 14:16:09.669639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c9c0) on tqpair=0x1ffc540 00:22:01.868 [2024-07-26 14:16:09.669654] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.868 [2024-07-26 14:16:09.669664] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.868 [2024-07-26 14:16:09.669670] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.868 [2024-07-26 14:16:09.669677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205ccc0) on tqpair=0x1ffc540 00:22:01.868 [2024-07-26 14:16:09.669687] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.868 [2024-07-26 14:16:09.669696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.868 [2024-07-26 14:16:09.669703] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:01.868 [2024-07-26 14:16:09.669709] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205ce40) on tqpair=0x1ffc540 00:22:01.868 ===================================================== 00:22:01.868 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:01.868 ===================================================== 00:22:01.868 Controller Capabilities/Features 00:22:01.868 ================================ 00:22:01.868 Vendor ID: 8086 00:22:01.868 Subsystem Vendor ID: 8086 00:22:01.868 Serial Number: SPDK00000000000001 00:22:01.868 Model Number: SPDK bdev Controller 00:22:01.868 Firmware Version: 24.09 00:22:01.868 Recommended Arb Burst: 6 00:22:01.868 IEEE OUI Identifier: e4 d2 5c 00:22:01.868 Multi-path I/O 00:22:01.868 May have multiple subsystem ports: Yes 00:22:01.868 May have multiple controllers: Yes 00:22:01.868 Associated with SR-IOV VF: No 00:22:01.868 Max Data Transfer Size: 131072 00:22:01.868 Max Number of Namespaces: 32 00:22:01.868 Max Number of I/O Queues: 127 00:22:01.868 NVMe Specification Version (VS): 1.3 00:22:01.868 NVMe Specification Version (Identify): 1.3 00:22:01.868 Maximum Queue Entries: 128 00:22:01.868 Contiguous Queues Required: Yes 00:22:01.868 Arbitration Mechanisms Supported 00:22:01.868 Weighted Round Robin: Not Supported 00:22:01.868 Vendor Specific: Not Supported 00:22:01.868 Reset Timeout: 15000 ms 00:22:01.868 Doorbell Stride: 4 bytes 00:22:01.868 NVM Subsystem Reset: Not Supported 00:22:01.868 Command Sets Supported 00:22:01.868 NVM Command Set: Supported 00:22:01.868 Boot Partition: Not Supported 00:22:01.868 Memory Page Size Minimum: 4096 bytes 00:22:01.868 Memory Page Size Maximum: 4096 bytes 00:22:01.868 Persistent Memory Region: Not Supported 00:22:01.868 Optional Asynchronous Events Supported 00:22:01.868 Namespace Attribute Notices: Supported 00:22:01.868 Firmware Activation Notices: Not Supported 00:22:01.868 ANA Change Notices: Not Supported 00:22:01.868 PLE Aggregate Log Change Notices: Not Supported 00:22:01.868 LBA Status Info Alert Notices: Not Supported 00:22:01.868 EGE Aggregate Log Change Notices: Not Supported 00:22:01.868 Normal NVM Subsystem Shutdown event: Not Supported 00:22:01.868 Zone Descriptor Change Notices: Not Supported 00:22:01.868 Discovery Log Change Notices: Not Supported 00:22:01.868 Controller Attributes 00:22:01.868 128-bit Host Identifier: Supported 00:22:01.868 Non-Operational Permissive Mode: Not Supported 00:22:01.868 NVM Sets: Not Supported 00:22:01.868 Read Recovery Levels: Not Supported 00:22:01.868 Endurance Groups: Not Supported 00:22:01.868 Predictable Latency Mode: Not Supported 00:22:01.868 Traffic Based Keep ALive: Not Supported 00:22:01.868 Namespace Granularity: Not Supported 00:22:01.868 SQ Associations: Not Supported 00:22:01.868 UUID List: Not Supported 00:22:01.868 Multi-Domain Subsystem: Not Supported 00:22:01.868 Fixed Capacity Management: Not Supported 00:22:01.868 Variable Capacity Management: Not Supported 00:22:01.868 Delete Endurance Group: Not Supported 00:22:01.868 Delete NVM Set: Not Supported 00:22:01.868 Extended LBA Formats Supported: Not Supported 00:22:01.868 Flexible Data Placement Supported: Not Supported 00:22:01.868 00:22:01.868 Controller Memory Buffer Support 00:22:01.868 ================================ 00:22:01.868 Supported: No 00:22:01.868 00:22:01.868 Persistent Memory Region Support 00:22:01.868 ================================ 00:22:01.868 Supported: No 00:22:01.868 00:22:01.868 Admin Command Set Attributes 00:22:01.868 
============================ 00:22:01.868 Security Send/Receive: Not Supported 00:22:01.868 Format NVM: Not Supported 00:22:01.868 Firmware Activate/Download: Not Supported 00:22:01.868 Namespace Management: Not Supported 00:22:01.868 Device Self-Test: Not Supported 00:22:01.868 Directives: Not Supported 00:22:01.868 NVMe-MI: Not Supported 00:22:01.868 Virtualization Management: Not Supported 00:22:01.868 Doorbell Buffer Config: Not Supported 00:22:01.868 Get LBA Status Capability: Not Supported 00:22:01.868 Command & Feature Lockdown Capability: Not Supported 00:22:01.868 Abort Command Limit: 4 00:22:01.868 Async Event Request Limit: 4 00:22:01.868 Number of Firmware Slots: N/A 00:22:01.868 Firmware Slot 1 Read-Only: N/A 00:22:01.868 Firmware Activation Without Reset: N/A 00:22:01.868 Multiple Update Detection Support: N/A 00:22:01.868 Firmware Update Granularity: No Information Provided 00:22:01.868 Per-Namespace SMART Log: No 00:22:01.868 Asymmetric Namespace Access Log Page: Not Supported 00:22:01.868 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:01.868 Command Effects Log Page: Supported 00:22:01.868 Get Log Page Extended Data: Supported 00:22:01.868 Telemetry Log Pages: Not Supported 00:22:01.868 Persistent Event Log Pages: Not Supported 00:22:01.868 Supported Log Pages Log Page: May Support 00:22:01.868 Commands Supported & Effects Log Page: Not Supported 00:22:01.868 Feature Identifiers & Effects Log Page:May Support 00:22:01.868 NVMe-MI Commands & Effects Log Page: May Support 00:22:01.868 Data Area 4 for Telemetry Log: Not Supported 00:22:01.868 Error Log Page Entries Supported: 128 00:22:01.868 Keep Alive: Supported 00:22:01.868 Keep Alive Granularity: 10000 ms 00:22:01.868 00:22:01.868 NVM Command Set Attributes 00:22:01.868 ========================== 00:22:01.868 Submission Queue Entry Size 00:22:01.868 Max: 64 00:22:01.868 Min: 64 00:22:01.868 Completion Queue Entry Size 00:22:01.868 Max: 16 00:22:01.868 Min: 16 00:22:01.868 Number of Namespaces: 32 00:22:01.868 Compare Command: Supported 00:22:01.868 Write Uncorrectable Command: Not Supported 00:22:01.868 Dataset Management Command: Supported 00:22:01.868 Write Zeroes Command: Supported 00:22:01.868 Set Features Save Field: Not Supported 00:22:01.868 Reservations: Supported 00:22:01.868 Timestamp: Not Supported 00:22:01.868 Copy: Supported 00:22:01.868 Volatile Write Cache: Present 00:22:01.868 Atomic Write Unit (Normal): 1 00:22:01.868 Atomic Write Unit (PFail): 1 00:22:01.868 Atomic Compare & Write Unit: 1 00:22:01.868 Fused Compare & Write: Supported 00:22:01.868 Scatter-Gather List 00:22:01.868 SGL Command Set: Supported 00:22:01.868 SGL Keyed: Supported 00:22:01.868 SGL Bit Bucket Descriptor: Not Supported 00:22:01.868 SGL Metadata Pointer: Not Supported 00:22:01.868 Oversized SGL: Not Supported 00:22:01.868 SGL Metadata Address: Not Supported 00:22:01.868 SGL Offset: Supported 00:22:01.868 Transport SGL Data Block: Not Supported 00:22:01.868 Replay Protected Memory Block: Not Supported 00:22:01.868 00:22:01.868 Firmware Slot Information 00:22:01.868 ========================= 00:22:01.869 Active slot: 1 00:22:01.869 Slot 1 Firmware Revision: 24.09 00:22:01.869 00:22:01.869 00:22:01.869 Commands Supported and Effects 00:22:01.869 ============================== 00:22:01.869 Admin Commands 00:22:01.869 -------------- 00:22:01.869 Get Log Page (02h): Supported 00:22:01.869 Identify (06h): Supported 00:22:01.869 Abort (08h): Supported 00:22:01.869 Set Features (09h): Supported 00:22:01.869 Get Features (0Ah): Supported 
00:22:01.869 Asynchronous Event Request (0Ch): Supported 00:22:01.869 Keep Alive (18h): Supported 00:22:01.869 I/O Commands 00:22:01.869 ------------ 00:22:01.869 Flush (00h): Supported LBA-Change 00:22:01.869 Write (01h): Supported LBA-Change 00:22:01.869 Read (02h): Supported 00:22:01.869 Compare (05h): Supported 00:22:01.869 Write Zeroes (08h): Supported LBA-Change 00:22:01.869 Dataset Management (09h): Supported LBA-Change 00:22:01.869 Copy (19h): Supported LBA-Change 00:22:01.869 00:22:01.869 Error Log 00:22:01.869 ========= 00:22:01.869 00:22:01.869 Arbitration 00:22:01.869 =========== 00:22:01.869 Arbitration Burst: 1 00:22:01.869 00:22:01.869 Power Management 00:22:01.869 ================ 00:22:01.869 Number of Power States: 1 00:22:01.869 Current Power State: Power State #0 00:22:01.869 Power State #0: 00:22:01.869 Max Power: 0.00 W 00:22:01.869 Non-Operational State: Operational 00:22:01.869 Entry Latency: Not Reported 00:22:01.869 Exit Latency: Not Reported 00:22:01.869 Relative Read Throughput: 0 00:22:01.869 Relative Read Latency: 0 00:22:01.869 Relative Write Throughput: 0 00:22:01.869 Relative Write Latency: 0 00:22:01.869 Idle Power: Not Reported 00:22:01.869 Active Power: Not Reported 00:22:01.869 Non-Operational Permissive Mode: Not Supported 00:22:01.869 00:22:01.869 Health Information 00:22:01.869 ================== 00:22:01.869 Critical Warnings: 00:22:01.869 Available Spare Space: OK 00:22:01.869 Temperature: OK 00:22:01.869 Device Reliability: OK 00:22:01.869 Read Only: No 00:22:01.869 Volatile Memory Backup: OK 00:22:01.869 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:01.869 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:01.869 Available Spare: 0% 00:22:01.869 Available Spare Threshold: 0% 00:22:01.869 Life Percentage Used:[2024-07-26 14:16:09.669834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.869 [2024-07-26 14:16:09.669847] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ffc540) 00:22:01.869 [2024-07-26 14:16:09.669857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.869 [2024-07-26 14:16:09.669878] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205ce40, cid 7, qid 0 00:22:01.869 [2024-07-26 14:16:09.670012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.869 [2024-07-26 14:16:09.670025] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.869 [2024-07-26 14:16:09.670032] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.869 [2024-07-26 14:16:09.670038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205ce40) on tqpair=0x1ffc540 00:22:01.869 [2024-07-26 14:16:09.670080] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:01.869 [2024-07-26 14:16:09.670100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c3c0) on tqpair=0x1ffc540 00:22:01.869 [2024-07-26 14:16:09.670113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.869 [2024-07-26 14:16:09.670123] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c540) on tqpair=0x1ffc540 00:22:01.869 [2024-07-26 14:16:09.670130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:01.869 [2024-07-26 14:16:09.670139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c6c0) on tqpair=0x1ffc540 00:22:01.869 [2024-07-26 14:16:09.670146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.869 [2024-07-26 14:16:09.670154] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c840) on tqpair=0x1ffc540 00:22:01.869 [2024-07-26 14:16:09.670162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.869 [2024-07-26 14:16:09.670175] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.869 [2024-07-26 14:16:09.670197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.869 [2024-07-26 14:16:09.670204] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ffc540) 00:22:01.869 [2024-07-26 14:16:09.670215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.869 [2024-07-26 14:16:09.670236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c840, cid 3, qid 0 00:22:01.869 [2024-07-26 14:16:09.670340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.869 [2024-07-26 14:16:09.670355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.869 [2024-07-26 14:16:09.670362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.869 [2024-07-26 14:16:09.670369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c840) on tqpair=0x1ffc540 00:22:01.869 [2024-07-26 14:16:09.670380] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.869 [2024-07-26 14:16:09.670387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.869 [2024-07-26 14:16:09.670393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ffc540) 00:22:01.869 [2024-07-26 14:16:09.670404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.869 [2024-07-26 14:16:09.670429] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c840, cid 3, qid 0 00:22:01.869 [2024-07-26 14:16:09.670523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.869 [2024-07-26 14:16:09.674548] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.869 [2024-07-26 14:16:09.674556] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.869 [2024-07-26 14:16:09.674563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c840) on tqpair=0x1ffc540 00:22:01.869 [2024-07-26 14:16:09.674570] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:01.869 [2024-07-26 14:16:09.674578] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:01.869 [2024-07-26 14:16:09.674595] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.869 [2024-07-26 14:16:09.674604] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.869 [2024-07-26 14:16:09.674610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ffc540) 00:22:01.869 [2024-07-26 
14:16:09.674621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.869 [2024-07-26 14:16:09.674642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205c840, cid 3, qid 0 00:22:01.869 [2024-07-26 14:16:09.674763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.869 [2024-07-26 14:16:09.674778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.869 [2024-07-26 14:16:09.674785] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.869 [2024-07-26 14:16:09.674795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205c840) on tqpair=0x1ffc540 00:22:01.869 [2024-07-26 14:16:09.674809] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:22:01.869 0% 00:22:01.869 Data Units Read: 0 00:22:01.869 Data Units Written: 0 00:22:01.869 Host Read Commands: 0 00:22:01.869 Host Write Commands: 0 00:22:01.869 Controller Busy Time: 0 minutes 00:22:01.869 Power Cycles: 0 00:22:01.869 Power On Hours: 0 hours 00:22:01.869 Unsafe Shutdowns: 0 00:22:01.869 Unrecoverable Media Errors: 0 00:22:01.869 Lifetime Error Log Entries: 0 00:22:01.869 Warning Temperature Time: 0 minutes 00:22:01.869 Critical Temperature Time: 0 minutes 00:22:01.869 00:22:01.869 Number of Queues 00:22:01.869 ================ 00:22:01.869 Number of I/O Submission Queues: 127 00:22:01.869 Number of I/O Completion Queues: 127 00:22:01.869 00:22:01.869 Active Namespaces 00:22:01.869 ================= 00:22:01.869 Namespace ID:1 00:22:01.869 Error Recovery Timeout: Unlimited 00:22:01.869 Command Set Identifier: NVM (00h) 00:22:01.869 Deallocate: Supported 00:22:01.869 Deallocated/Unwritten Error: Not Supported 00:22:01.869 Deallocated Read Value: Unknown 00:22:01.869 Deallocate in Write Zeroes: Not Supported 00:22:01.869 Deallocated Guard Field: 0xFFFF 00:22:01.869 Flush: Supported 00:22:01.869 Reservation: Supported 00:22:01.869 Namespace Sharing Capabilities: Multiple Controllers 00:22:01.869 Size (in LBAs): 131072 (0GiB) 00:22:01.869 Capacity (in LBAs): 131072 (0GiB) 00:22:01.869 Utilization (in LBAs): 131072 (0GiB) 00:22:01.869 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:01.869 EUI64: ABCDEF0123456789 00:22:01.869 UUID: 23608a9f-f189-4175-b64d-14f53df5cdd1 00:22:01.869 Thin Provisioning: Not Supported 00:22:01.869 Per-NS Atomic Units: Yes 00:22:01.869 Atomic Boundary Size (Normal): 0 00:22:01.870 Atomic Boundary Size (PFail): 0 00:22:01.870 Atomic Boundary Offset: 0 00:22:01.870 Maximum Single Source Range Length: 65535 00:22:01.870 Maximum Copy Length: 65535 00:22:01.870 Maximum Source Range Count: 1 00:22:01.870 NGUID/EUI64 Never Reused: No 00:22:01.870 Namespace Write Protected: No 00:22:01.870 Number of LBA Formats: 1 00:22:01.870 Current LBA Format: LBA Format #00 00:22:01.870 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:01.870 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
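The identify dump above (controller SPDK00000000000001, firmware 24.09, one 512-byte-LBA namespace with NGUID ABCDEF0123456789ABCDEF0123456789) was gathered over the fabric by the test binary; the same data can be pulled by hand with stock nvme-cli against the listener this test created. A minimal sketch, assuming nvme-cli is installed on the initiator side and that /dev/nvme0 is the controller that appears after connect (the index depends on what is already attached):

  nvme discover -t tcp -a 10.0.0.2 -s 4420    # discovery log: shows nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0                     # controller data: VS 1.3, MDTS, log pages, ...
  nvme id-ns /dev/nvme0n1                     # namespace data: LBA formats, NGUID/EUI64
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1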
00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:01.870 rmmod nvme_tcp 00:22:01.870 rmmod nvme_fabrics 00:22:01.870 rmmod nvme_keyring 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 277676 ']' 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 277676 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 277676 ']' 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 277676 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 277676 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 277676' 00:22:01.870 killing process with pid 277676 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 277676 00:22:01.870 14:16:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 277676 00:22:02.129 14:16:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:02.129 14:16:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:02.129 14:16:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:02.129 14:16:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:02.129 14:16:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:02.129 14:16:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.129 14:16:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.129 14:16:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:04.662 
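Condensed, the nvmftestfini teardown traced above amounts to the following; the pid and interface name are specific to this run, the steps themselves are generic:

  modprobe -v -r nvme-tcp       # prints the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics   # harmless if the previous line already removed it
  kill 277676                   # killprocess: stop the nvmf_tgt reactor (reactor_0)
  ip -4 addr flush cvl_0_1      # remove_spdk_ns cleanup: drop the initiator-side test address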
00:22:04.662 real 0m5.497s 00:22:04.662 user 0m4.770s 00:22:04.662 sys 0m1.830s 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:04.662 ************************************ 00:22:04.662 END TEST nvmf_identify 00:22:04.662 ************************************ 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.662 ************************************ 00:22:04.662 START TEST nvmf_perf 00:22:04.662 ************************************ 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:04.662 * Looking for test storage... 00:22:04.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.662 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.663 14:16:12 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:04.663 14:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:06.563 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.563 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:06.563 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.563 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.563 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.563 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.563 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.563 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.563 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.563 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:06.563 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.563 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:06.564 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:06.564 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.564 14:16:14 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:06.564 Found net devices under 0000:09:00.0: cvl_0_0 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:06.564 Found net devices under 0000:09:00.1: cvl_0_1 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:06.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:22:06.564 00:22:06.564 --- 10.0.0.2 ping statistics --- 00:22:06.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.564 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:06.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:22:06.564 00:22:06.564 --- 10.0.0.1 ping statistics --- 00:22:06.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.564 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=279717 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 279717 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 279717 ']' 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.564 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:06.565 [2024-07-26 14:16:14.443236] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:22:06.565 [2024-07-26 14:16:14.443335] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.565 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.565 [2024-07-26 14:16:14.515017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.823 [2024-07-26 14:16:14.622330] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
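The nvmftestinit trace above builds the test network out of the two E810 ports it discovered (0000:09:00.0/1): cvl_0_0 moves into a private network namespace and carries the target address, while cvl_0_1 stays in the root namespace as the initiator. Collapsed into plain shell, the sequence the trace executes is roughly the following; every command is lifted from the trace, with only the full Jenkins workspace path to nvmf_tgt shortened:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # cross-namespace pings verify the link, then the target starts inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF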
00:22:06.823 [2024-07-26 14:16:14.622379] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.823 [2024-07-26 14:16:14.622402] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.823 [2024-07-26 14:16:14.622412] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.823 [2024-07-26 14:16:14.622421] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:06.823 [2024-07-26 14:16:14.622502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.823 [2024-07-26 14:16:14.622560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.823 [2024-07-26 14:16:14.622649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.823 [2024-07-26 14:16:14.622649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.823 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:06.823 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:22:06.823 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.823 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:06.823 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:06.823 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.823 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:06.823 14:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:10.100 14:16:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:10.100 14:16:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:10.358 14:16:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:22:10.358 14:16:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:10.615 14:16:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:10.615 14:16:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:22:10.615 14:16:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:10.615 14:16:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:10.616 14:16:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:10.873 [2024-07-26 14:16:18.695124] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.873 14:16:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:11.131 14:16:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:11.131 14:16:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:11.388 14:16:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:11.388 14:16:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:11.646 14:16:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:11.903 [2024-07-26 14:16:19.682710] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.903 14:16:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:12.161 14:16:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:22:12.161 14:16:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:22:12.161 14:16:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:12.161 14:16:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:22:13.532 Initializing NVMe Controllers 00:22:13.532 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:22:13.532 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:22:13.532 Initialization complete. Launching workers. 00:22:13.532 ======================================================== 00:22:13.532 Latency(us) 00:22:13.532 Device Information : IOPS MiB/s Average min max 00:22:13.532 PCIE (0000:0b:00.0) NSID 1 from core 0: 85466.01 333.85 373.80 37.55 4339.17 00:22:13.532 ======================================================== 00:22:13.532 Total : 85466.01 333.85 373.80 37.55 4339.17 00:22:13.532 00:22:13.532 14:16:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:13.532 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.465 Initializing NVMe Controllers 00:22:14.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:14.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:14.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:14.465 Initialization complete. Launching workers. 
00:22:14.465 ======================================================== 00:22:14.465 Latency(us) 00:22:14.465 Device Information : IOPS MiB/s Average min max 00:22:14.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 100.00 0.39 10394.75 147.13 45948.40 00:22:14.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 43.00 0.17 23731.49 6978.73 48884.08 00:22:14.465 ======================================================== 00:22:14.465 Total : 143.00 0.56 14405.10 147.13 48884.08 00:22:14.465 00:22:14.465 14:16:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:14.465 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.836 Initializing NVMe Controllers 00:22:15.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:15.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:15.836 Initialization complete. Launching workers. 00:22:15.836 ======================================================== 00:22:15.836 Latency(us) 00:22:15.836 Device Information : IOPS MiB/s Average min max 00:22:15.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8590.52 33.56 3742.00 638.83 7346.77 00:22:15.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3876.78 15.14 8291.16 4587.72 15960.72 00:22:15.836 ======================================================== 00:22:15.836 Total : 12467.30 48.70 5156.59 638.83 15960.72 00:22:15.836 00:22:15.836 14:16:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:15.836 14:16:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:15.836 14:16:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:15.836 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.366 Initializing NVMe Controllers 00:22:18.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:18.366 Controller IO queue size 128, less than required. 00:22:18.366 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:18.366 Controller IO queue size 128, less than required. 00:22:18.366 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:18.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:18.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:18.366 Initialization complete. Launching workers. 
00:22:18.366 ======================================================== 00:22:18.366 Latency(us) 00:22:18.366 Device Information : IOPS MiB/s Average min max 00:22:18.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1696.37 424.09 76688.90 40334.07 110336.70 00:22:18.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 580.11 145.03 226495.18 102701.62 368174.00 00:22:18.366 ======================================================== 00:22:18.366 Total : 2276.49 569.12 114863.85 40334.07 368174.00 00:22:18.366 00:22:18.366 14:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:18.366 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.623 No valid NVMe controllers or AIO or URING devices found 00:22:18.623 Initializing NVMe Controllers 00:22:18.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:18.623 Controller IO queue size 128, less than required. 00:22:18.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:18.623 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:18.623 Controller IO queue size 128, less than required. 00:22:18.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:18.623 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:18.623 WARNING: Some requested NVMe devices were skipped 00:22:18.623 14:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:18.623 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.904 Initializing NVMe Controllers 00:22:21.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.904 Controller IO queue size 128, less than required. 00:22:21.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:21.904 Controller IO queue size 128, less than required. 00:22:21.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:21.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:21.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:21.904 Initialization complete. Launching workers. 
00:22:21.904 00:22:21.904 ==================== 00:22:21.904 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:21.904 TCP transport: 00:22:21.904 polls: 9924 00:22:21.904 idle_polls: 6974 00:22:21.904 sock_completions: 2950 00:22:21.905 nvme_completions: 5693 00:22:21.905 submitted_requests: 8668 00:22:21.905 queued_requests: 1 00:22:21.905 00:22:21.905 ==================== 00:22:21.905 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:21.905 TCP transport: 00:22:21.905 polls: 10227 00:22:21.905 idle_polls: 6532 00:22:21.905 sock_completions: 3695 00:22:21.905 nvme_completions: 6513 00:22:21.905 submitted_requests: 9710 00:22:21.905 queued_requests: 1 00:22:21.905 ======================================================== 00:22:21.905 Latency(us) 00:22:21.905 Device Information : IOPS MiB/s Average min max 00:22:21.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1420.96 355.24 92953.29 56294.19 157340.88 00:22:21.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1625.67 406.42 80205.61 49217.45 119732.28 00:22:21.905 ======================================================== 00:22:21.905 Total : 3046.64 761.66 86151.18 49217.45 157340.88 00:22:21.905 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:21.905 rmmod nvme_tcp 00:22:21.905 rmmod nvme_fabrics 00:22:21.905 rmmod nvme_keyring 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 279717 ']' 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 279717 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 279717 ']' 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 279717 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 279717 00:22:21.905 14:16:29 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 279717' 00:22:21.905 killing process with pid 279717 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 279717 00:22:21.905 14:16:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 279717 00:22:23.277 14:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.277 14:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.277 14:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.277 14:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.277 14:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.277 14:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.277 14:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.277 14:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.363 14:16:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:25.363 00:22:25.363 real 0m21.080s 00:22:25.363 user 1m4.303s 00:22:25.363 sys 0m5.477s 00:22:25.363 14:16:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:25.363 14:16:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:25.363 ************************************ 00:22:25.363 END TEST nvmf_perf 00:22:25.363 ************************************ 00:22:25.363 14:16:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:25.363 14:16:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:25.363 14:16:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:25.363 14:16:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.363 ************************************ 00:22:25.363 START TEST nvmf_fio_host 00:22:25.363 ************************************ 00:22:25.363 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:25.363 * Looking for test storage... 
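NOTE: the --transport-stat dump from the perf run that just finished is worth a second look. Roughly, polls counts every pass of the TCP poll group and idle_polls counts the passes that completed no socket work, so dividing the logged numbers gives an approximate idle ratio per namespace queue (plain arithmetic on the log, not something the harness computes):

    # NSID 1: 6974 idle of 9924 polls; NSID 2: 6532 of 10227
    awk 'BEGIN { printf "nsid1 %.2f nsid2 %.2f\n", 6974/9924, 6532/10227 }'
    # -> nsid1 0.70 nsid2 0.64

A high idle ratio with 262144-byte IOs at queue depth 128 mostly means the poller spins faster than the IOs complete; it is not by itself a problem.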
00:22:25.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:25.363 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.363 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.363 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.363 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.363 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host 
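NOTE: the nvmf/common.sh sourcing traced above fixes the constants every nvmf host test runs with. Restated as a standalone snippet (values copied from the trace; the host NQN is regenerated by nvme gen-hostnqn on every run, so the uuid differs between runs, and the NVME_HOSTID derivation shown is a sketch of the relationship visible in the trace):

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID="${NVME_HOSTNQN##*:}"      # the uuid part, per the traced values
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

The 192.168.100 prefix appears to be the default for the RDMA path; the TCP init below overrides it with the 10.0.0.x namespace addresses.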
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:25.364 14:16:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:27.940 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.940 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:27.940 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.941 
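NOTE: device discovery above has matched both E810 functions (vendor 0x8086, device 0x159b, driver ice) against the e810 allow-list; the lines that follow resolve each PCI address to its kernel net device through sysfs. The same lookup can be reproduced by hand, using the same glob the script traces (pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)):

    ls /sys/bus/pci/devices/0000:09:00.0/net/   # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:09:00.1/net/   # -> cvl_0_1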
14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:27.941 Found net devices under 0000:09:00.0: cvl_0_0 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:27.941 Found net devices under 0000:09:00.1: cvl_0_1 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:27.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:22:27.941 00:22:27.941 --- 10.0.0.2 ping statistics --- 00:22:27.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.941 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:27.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:22:27.941 00:22:27.941 --- 10.0.0.1 ping statistics --- 00:22:27.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.941 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=283678 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 283678 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 283678 ']' 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
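NOTE: nvmf_tcp_init above builds the test topology from one dual-port NIC: the target port moves into a network namespace with 10.0.0.2, the initiator port stays in the root namespace with 10.0.0.1, and an iptables rule opens the NVMe/TCP port. Collected from the trace into one block:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns

Both pings answer in the log (0.239 ms and 0.169 ms), confirming the two ports reach each other on the wire before the target starts.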
host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.941 [2024-07-26 14:16:35.622716] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:22:27.941 [2024-07-26 14:16:35.622826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.941 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.941 [2024-07-26 14:16:35.691017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:27.941 [2024-07-26 14:16:35.805814] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.941 [2024-07-26 14:16:35.805883] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.941 [2024-07-26 14:16:35.805897] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.941 [2024-07-26 14:16:35.805908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.941 [2024-07-26 14:16:35.805918] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
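NOTE: the target is launched inside the namespace and the test blocks until its RPC socket answers. A condensed sketch of that launch; the polling loop stands in for the harness's waitforlisten helper, which does more bookkeeping than shown here:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait for /var/tmp/spdk.sock to start answering RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

The -m 0xF core mask matches the four reactors reported on cores 0-3 once startup completes.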
00:22:27.941 [2024-07-26 14:16:35.806000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.941 [2024-07-26 14:16:35.806025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.941 [2024-07-26 14:16:35.806083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.941 [2024-07-26 14:16:35.806086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.941 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.942 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:22:27.942 14:16:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:28.200 [2024-07-26 14:16:36.204971] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.458 14:16:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:28.458 14:16:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:28.458 14:16:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.458 14:16:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:28.716 Malloc1 00:22:28.716 14:16:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:28.973 14:16:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:29.231 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:29.488 [2024-07-26 14:16:37.373046] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.488 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:29.746 14:16:37 
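NOTE: the RPC sequence traced above is the entire provisioning for this test; pulled out of the log it reads:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

i.e. a 64 MB RAM-backed namespace exported behind one subsystem, plus a discovery listener on the same address and port.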
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:29.746 14:16:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:30.004 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:30.004 fio-3.35 00:22:30.004 Starting 1 thread 00:22:30.004 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.530 00:22:32.530 test: (groupid=0, jobs=1): err= 0: pid=284044: Fri Jul 26 14:16:40 2024 00:22:32.530 read: IOPS=8843, BW=34.5MiB/s (36.2MB/s)(69.3MiB/2006msec) 00:22:32.530 slat (usec): min=2, max=184, avg= 2.63, stdev= 2.00 00:22:32.530 clat (usec): min=2652, max=14173, avg=7864.81, stdev=710.45 00:22:32.530 lat (usec): min=2682, max=14176, avg=7867.45, stdev=710.38 00:22:32.530 clat percentiles (usec): 00:22:32.530 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 7046], 20.00th=[ 7308], 00:22:32.530 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 8029], 00:22:32.530 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:22:32.530 | 99.00th=[ 9634], 99.50th=[ 9765], 99.90th=[11994], 99.95th=[12518], 00:22:32.530 | 99.99th=[14222] 00:22:32.530 bw ( KiB/s): min=33056, max=36192, per=99.92%, avg=35346.00, stdev=1527.82, samples=4 00:22:32.530 iops : min= 8264, max= 9048, avg=8836.50, stdev=381.95, samples=4 00:22:32.530 write: IOPS=8860, BW=34.6MiB/s (36.3MB/s)(69.4MiB/2006msec); 0 zone 
resets 00:22:32.530 slat (usec): min=2, max=128, avg= 2.76, stdev= 1.51 00:22:32.530 clat (usec): min=1420, max=12028, avg=6538.90, stdev=588.11 00:22:32.530 lat (usec): min=1429, max=12031, avg=6541.66, stdev=588.09 00:22:32.530 clat percentiles (usec): 00:22:32.530 | 1.00th=[ 5211], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:22:32.530 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:22:32.530 | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7242], 95.00th=[ 7439], 00:22:32.530 | 99.00th=[ 7832], 99.50th=[ 8029], 99.90th=[10421], 99.95th=[10945], 00:22:32.530 | 99.99th=[11994] 00:22:32.530 bw ( KiB/s): min=34024, max=36416, per=99.97%, avg=35430.00, stdev=1092.10, samples=4 00:22:32.530 iops : min= 8506, max= 9104, avg=8857.50, stdev=273.02, samples=4 00:22:32.530 lat (msec) : 2=0.03%, 4=0.10%, 10=99.65%, 20=0.22% 00:22:32.530 cpu : usr=64.39%, sys=33.92%, ctx=108, majf=0, minf=40 00:22:32.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:32.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:32.530 issued rwts: total=17741,17774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:32.530 00:22:32.530 Run status group 0 (all jobs): 00:22:32.530 READ: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.3MiB (72.7MB), run=2006-2006msec 00:22:32.530 WRITE: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=69.4MiB (72.8MB), run=2006-2006msec 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
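NOTE: both fio passes drive the SPDK NVMe ioengine by preloading the plugin; there is no block device, so --filename encodes the transport tuple instead of a path. The invocation shape, as traced (the job options themselves come from example_config.fio and mock_sgl_config.fio in the tree):

    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
        ./app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096

The ldd/grep dance beforehand only decides whether an ASAN runtime must be preloaded ahead of the plugin; in this build none is linked, so asan_lib stays empty and LD_PRELOAD carries the plugin alone.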
common/autotest_common.sh@1345 -- # asan_lib= 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:32.530 14:16:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:32.530 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:32.530 fio-3.35 00:22:32.530 Starting 1 thread 00:22:32.530 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.056 00:22:35.056 test: (groupid=0, jobs=1): err= 0: pid=284496: Fri Jul 26 14:16:42 2024 00:22:35.056 read: IOPS=8478, BW=132MiB/s (139MB/s)(266MiB/2005msec) 00:22:35.056 slat (nsec): min=2852, max=96697, avg=3835.11, stdev=1728.74 00:22:35.056 clat (usec): min=2692, max=22317, avg=8695.99, stdev=2032.57 00:22:35.056 lat (usec): min=2696, max=22327, avg=8699.82, stdev=2032.72 00:22:35.056 clat percentiles (usec): 00:22:35.056 | 1.00th=[ 4555], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 6980], 00:22:35.056 | 30.00th=[ 7635], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9110], 00:22:35.056 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11338], 95.00th=[12387], 00:22:35.056 | 99.00th=[14353], 99.50th=[15533], 99.90th=[17171], 99.95th=[17433], 00:22:35.056 | 99.99th=[17695] 00:22:35.056 bw ( KiB/s): min=63264, max=73984, per=50.48%, avg=68488.00, stdev=5238.13, samples=4 00:22:35.056 iops : min= 3954, max= 4624, avg=4280.50, stdev=327.38, samples=4 00:22:35.056 write: IOPS=4896, BW=76.5MiB/s (80.2MB/s)(140MiB/1834msec); 0 zone resets 00:22:35.056 slat (usec): min=30, max=322, avg=34.29, stdev= 8.26 00:22:35.056 clat (usec): min=4151, max=19064, avg=11442.22, stdev=1997.58 00:22:35.056 lat (usec): min=4188, max=19096, avg=11476.50, stdev=1999.19 00:22:35.056 clat percentiles (usec): 00:22:35.056 | 1.00th=[ 7570], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9765], 00:22:35.056 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11207], 60.00th=[11731], 00:22:35.056 | 70.00th=[12256], 80.00th=[13042], 90.00th=[14222], 95.00th=[15008], 00:22:35.056 | 99.00th=[16909], 99.50th=[17695], 99.90th=[18482], 99.95th=[18744], 00:22:35.056 | 99.99th=[19006] 00:22:35.056 bw ( KiB/s): min=65504, max=77216, per=91.12%, avg=71392.00, stdev=5512.68, samples=4 00:22:35.056 iops : min= 4094, max= 4826, avg=4462.00, stdev=344.54, samples=4 00:22:35.056 lat (msec) : 4=0.24%, 10=59.10%, 20=40.66%, 50=0.01% 00:22:35.056 cpu : usr=77.74%, sys=21.06%, ctx=32, majf=0, minf=66 
00:22:35.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:35.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:35.057 issued rwts: total=17000,8981,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.057 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:35.057 00:22:35.057 Run status group 0 (all jobs): 00:22:35.057 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=266MiB (279MB), run=2005-2005msec 00:22:35.057 WRITE: bw=76.5MiB/s (80.2MB/s), 76.5MiB/s-76.5MiB/s (80.2MB/s-80.2MB/s), io=140MiB (147MB), run=1834-1834msec 00:22:35.057 14:16:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:35.314 rmmod nvme_tcp 00:22:35.314 rmmod nvme_fabrics 00:22:35.314 rmmod nvme_keyring 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 283678 ']' 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 283678 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 283678 ']' 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 283678 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 283678 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 283678' 00:22:35.314 killing process with pid 283678 00:22:35.314 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 283678 00:22:35.314 14:16:43 
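NOTE: a quick consistency check on the summary lines above: fio's bandwidth is just IOPS times block size. For the 16 KiB mock-SGL read leg (plain arithmetic on the logged values):

    awk 'BEGIN { printf "%.0f MB/s\n", 8478 * 16384 / 1e6 }'   # -> 139 MB/s

which matches the reported "132MiB/s (139MB/s)". The same check on the earlier 4 KiB run gives 8843 * 4096 bytes/s, i.e. about 36.2 MB/s, matching its "34.5MiB/s (36.2MB/s)".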
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 283678 00:22:35.588 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:35.588 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:35.588 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:35.588 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.588 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.588 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.588 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.588 14:16:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.129 14:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:38.129 00:22:38.129 real 0m12.315s 00:22:38.129 user 0m36.513s 00:22:38.129 sys 0m3.981s 00:22:38.129 14:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:38.129 14:16:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.129 ************************************ 00:22:38.129 END TEST nvmf_fio_host 00:22:38.129 ************************************ 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.130 ************************************ 00:22:38.130 START TEST nvmf_failover 00:22:38.130 ************************************ 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:38.130 * Looking for test storage... 
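NOTE: nvmftestfini above unwinds the fixture in reverse: subsystem deleted over RPC, target process killed, host-side kernel modules unloaded, then the namespace plumbing removed. The visible teardown, plus one labeled assumption:

    modprobe -v -r nvme-tcp           # rmmod output shows nvme_fabrics/nvme_keyring dropping too
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk   # assumption: roughly what _remove_spdk_ns amounts to
    ip -4 addr flush cvl_0_1

Each test then reloads nvme-tcp during its own init (the modprobe traced earlier), so the host stack starts clean every time.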
00:22:38.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
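NOTE: failover.sh declares the same backing geometry the fio test used (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512) and a second RPC socket for the bdevperf process it will drive. Before the device scan resumes below, the corresponding calls would look like this (a sketch only; the script's own ordering and bdev name may differ, and the capture ends before they appear):

    rpc.py bdev_malloc_create 64 512 -b Malloc1        # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
    rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods   # bdevperf RPCs go to its own socket

rpc_get_methods is only an example of addressing the second socket; the failover flow itself continues past the end of this capture.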
00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.130 14:16:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:40.034 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.035 14:16:47 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:40.035 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:40.035 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:40.035 Found net devices under 0000:09:00.0: cvl_0_0 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:40.035 Found net devices under 0000:09:00.1: cvl_0_1 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.035 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:40.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:22:40.036 00:22:40.036 --- 10.0.0.2 ping statistics --- 00:22:40.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.036 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:22:40.036 00:22:40.036 --- 10.0.0.1 ping statistics --- 00:22:40.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.036 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=286690 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 286690 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 286690 ']' 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.036 14:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:40.036 [2024-07-26 14:16:47.840889] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:22:40.036 [2024-07-26 14:16:47.840973] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.036 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.036 [2024-07-26 14:16:47.908688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:40.036 [2024-07-26 14:16:48.024390] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.036 [2024-07-26 14:16:48.024443] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.036 [2024-07-26 14:16:48.024457] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.036 [2024-07-26 14:16:48.024467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.036 [2024-07-26 14:16:48.024477] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
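
The nvmf_tcp_init sequence traced above is the heart of the phy test topology: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target side, its sibling (cvl_0_1) stays in the root namespace as the initiator side, and the two are pinged to prove the wire. Condensed into plain commands, a sketch of that sequence (interface, namespace, and address values exactly as this run used them; the job runs as root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port enters the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and the reverse path

Everything that follows runs the target inside that namespace, which is why nvmf_tgt above is launched through 'ip netns exec cvl_0_0_ns_spdk'.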
00:22:40.036 [2024-07-26 14:16:48.024561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.036 [2024-07-26 14:16:48.028547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.036 [2024-07-26 14:16:48.028557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.294 14:16:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.294 14:16:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:40.294 14:16:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:40.294 14:16:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:40.294 14:16:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:40.294 14:16:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.294 14:16:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:40.552 [2024-07-26 14:16:48.377315] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.552 14:16:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:40.810 Malloc0 00:22:40.810 14:16:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:41.068 14:16:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:41.325 14:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.583 [2024-07-26 14:16:49.399359] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.583 14:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:41.840 [2024-07-26 14:16:49.652115] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:41.841 14:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:42.099 [2024-07-26 14:16:49.901025] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:42.099 14:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=286978 00:22:42.099 14:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:42.099 14:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:42.099 14:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 286978 /var/tmp/bdevperf.sock
00:22:42.099 14:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 286978 ']'
00:22:42.099 14:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:42.099 14:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:42.099 14:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:42.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:42.099 14:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:42.099 14:16:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:22:42.357 14:16:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:42.357 14:16:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:22:42.357 14:16:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:42.615 NVMe0n1
00:22:42.615 14:16:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:43.179 00
00:22:43.179 14:16:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=287108
00:22:43.179 14:16:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:43.179 14:16:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:22:44.112 14:16:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:44.370 [2024-07-26 14:16:52.204930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1624f40 is same with the state(5) to be set
00:22:44.370 [... same tcp.c:1653 *ERROR* line repeated for tqpair=0x1624f40, timestamps 14:16:52.204998 through 14:16:52.205158; duplicates elided ...]
00:22:44.370 14:16:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:22:47.651 14:16:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:47.908 00
00:22:47.908 14:16:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:48.165 [2024-07-26 14:16:55.966547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625d10 is same with the state(5) to be set
00:22:48.166 [... same tcp.c:1653 *ERROR* line repeated for tqpair=0x1625d10, timestamps through 14:16:55.967272; duplicates elided ...]
00:22:48.166 14:16:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:22:51.446 14:16:58 nvmf_tcp.nvmf_host.nvmf_failover --
host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:51.446 [2024-07-26 14:16:59.227132] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.446 14:16:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:52.379 14:17:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:52.637 14:17:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 287108 00:22:59.213 0 00:22:59.213 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 286978 00:22:59.213 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 286978 ']' 00:22:59.213 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 286978 00:22:59.213 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:22:59.213 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:59.213 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 286978 00:22:59.213 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:59.213 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:59.213 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 286978' 00:22:59.213 killing process with pid 286978 00:22:59.213 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 286978 00:22:59.213 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 286978 00:22:59.213 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:59.213 [2024-07-26 14:16:49.965057] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:22:59.213 [2024-07-26 14:16:49.965156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286978 ] 00:22:59.213 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.213 [2024-07-26 14:16:50.027701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.213 [2024-07-26 14:16:50.140252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.213 Running I/O for 15 seconds... 
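
Stripped of timestamps, the failover exercise above reduces to a small RPC script. The subsystem, its Malloc0 namespace, and three listeners were created earlier (host/failover.sh@23-@28); bdevperf then opens two paths to it, perform_tests starts 15 seconds of verify I/O, and the test removes the active listener, waits, and rotates a fresh one in, three times over. A condensed sketch (paths shortened; the log uses the absolute workspace paths, and run_test_pid is the backgrounded bdevperf.py from host/failover.sh@38-@39):

  rpc_py=scripts/rpc.py                                 # target-side RPCs
  bperf="scripts/rpc.py -s /var/tmp/bdevperf.sock"      # bdevperf-side RPCs
  nqn=nqn.2016-06.io.spdk:cnode1

  $bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn   # path 1
  $bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn   # path 2
  $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420    # drop path 1 under I/O
  sleep 3
  $bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn   # path 3
  $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421    # drop path 2
  sleep 3
  $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420       # 4420 comes back
  sleep 1
  $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422    # final failover to 4420
  wait "$run_test_pid"          # bdevperf must finish its 15s cleanly (the 0 printed above)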
00:22:59.213 [2024-07-26 14:16:52.205568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:59.213 [2024-07-26 14:16:52.205610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.213 [... the same print_command/print_completion pair repeats for the remaining in-flight commands: WRITE sqid:1 lba:79040 through lba:79352 and READ sqid:1 lba:78456 through lba:78848, each completed ABORTED - SQ DELETION (00/08) qid:1; duplicates elided, dump truncated mid-entry at 14:16:52.208319 ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208634] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.214 [2024-07-26 14:16:52.208914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.214 [2024-07-26 14:16:52.208927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.208942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.215 [2024-07-26 14:16:52.208959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.208975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.215 [2024-07-26 14:16:52.208988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.215 [2024-07-26 14:16:52.209016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.215 [2024-07-26 14:16:52.209044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.215 [2024-07-26 14:16:52.209072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.215 [2024-07-26 14:16:52.209099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.215 [2024-07-26 14:16:52.209127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.215 [2024-07-26 14:16:52.209154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.215 [2024-07-26 14:16:52.209182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.215 [2024-07-26 14:16:52.209209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:59.215 [2024-07-26 14:16:52.209237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.215 [2024-07-26 14:16:52.209264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.215 [2024-07-26 14:16:52.209298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.215 [2024-07-26 14:16:52.209333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.215 [2024-07-26 14:16:52.209362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.215 [2024-07-26 14:16:52.209390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.215 [2024-07-26 14:16:52.209433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.215 [2024-07-26 14:16:52.209445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79472 len:8 PRP1 0x0 PRP2 0x0 00:22:59.215 [2024-07-26 14:16:52.209458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.215 [2024-07-26 14:16:52.209517] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xda6c10 was disconnected and freed. reset controller. 
00:22:59.215 [2024-07-26 14:16:52.209559] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:22:59.215 [2024-07-26 14:16:52.209594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.215 [2024-07-26 14:16:52.209613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.215 [2024-07-26 14:16:52.209628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.215 [2024-07-26 14:16:52.209641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.215 [2024-07-26 14:16:52.209655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.215 [2024-07-26 14:16:52.209668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.215 [2024-07-26 14:16:52.209682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.215 [2024-07-26 14:16:52.209695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.215 [2024-07-26 14:16:52.209708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:59.215 [2024-07-26 14:16:52.209754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd890f0 (9): Bad file descriptor
00:22:59.215 [2024-07-26 14:16:52.213044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:59.215 [2024-07-26 14:16:52.402485] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[elided: a long run of paired nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* records, 14:16:55.967557 through 14:16:55.971154 — queued READ I/O (lba 129208-129640, len:8, SGL TRANSPORT DATA BLOCK) and WRITE I/O (lba 129656-130176, len:8, SGL DATA BLOCK) on qid:1, each completed ABORTED - SQ DELETION (00/08)]
00:22:59.217 [2024-07-26 14:16:55.971169]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:16:55.971181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:16:55.971196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:130192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:16:55.971209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:16:55.971223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:16:55.971236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:16:55.971251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:16:55.971264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:16:55.971284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:16:55.971297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:16:55.971312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:130224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:16:55.971325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:16:55.971359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.217 [2024-07-26 14:16:55.971376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.217 [2024-07-26 14:16:55.971392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129648 len:8 PRP1 0x0 PRP2 0x0 00:22:59.217 [2024-07-26 14:16:55.971405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:16:55.971471] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdb7d40 was disconnected and freed. reset controller. 
00:22:59.217 [2024-07-26 14:16:55.971488] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:22:59.217 [2024-07-26 14:16:55.971545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.217 [2024-07-26 14:16:55.971566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.217 [2024-07-26 14:16:55.971582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.217 [2024-07-26 14:16:55.971595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.217 [2024-07-26 14:16:55.971609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.217 [2024-07-26 14:16:55.971621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.217 [2024-07-26 14:16:55.971636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.217 [2024-07-26 14:16:55.971648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.217 [2024-07-26 14:16:55.971664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:59.217 [2024-07-26 14:16:55.974965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:59.217 [2024-07-26 14:16:55.975007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd890f0 (9): Bad file descriptor
00:22:59.217 [2024-07-26 14:16:56.166111] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
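[editor's note] The episode above is the intended failover path: once the TCP connection to 10.0.0.2:4421 drops, every command still outstanding on the I/O qpair is completed with ABORTED - SQ DELETION (00/08), bdev_nvme fails over to the next registered path (10.0.0.2:4422), and the controller reset succeeds. As a rough sketch of how such alternate paths get registered in the first place — assuming SPDK's scripts/rpc.py interface, with the bdev name and ports here purely illustrative (the actual nvmf host failover test script in the SPDK tree drives this):

  # Attach the controller over its primary path (names/ports illustrative).
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # Re-attaching with the same bdev name and subnqn but another trsvcid
  # registers a secondary trid that bdev_nvme_failover_trid can switch to.
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1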
00:22:59.217 [2024-07-26 14:17:00.488724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.217 [2024-07-26 14:17:00.488785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.488822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.488852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.488868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.488881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.488895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.488908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.488923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.488936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.488951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.488973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.488989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489102] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:65 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101904 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.217 [2024-07-26 14:17:00.489972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.489986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.217 [2024-07-26 14:17:00.489999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.490013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.217 [2024-07-26 14:17:00.490025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.490046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.217 [2024-07-26 14:17:00.490060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.490074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.217 [2024-07-26 14:17:00.490086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.217 [2024-07-26 14:17:00.490100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.217 [2024-07-26 14:17:00.490113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.218 [2024-07-26 14:17:00.490140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.218 [2024-07-26 14:17:00.490168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.218 [2024-07-26 14:17:00.490410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.218 [2024-07-26 14:17:00.490437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 
14:17:00.490540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.218 [2024-07-26 14:17:00.490754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.218 [2024-07-26 14:17:00.490768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.490783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.490796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.490811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.490824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.490838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.490866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.490880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.490893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.490907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.490919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.490933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.490945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.490959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.490972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.490985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.490998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 
[2024-07-26 14:17:00.491726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.219 [2024-07-26 14:17:00.491866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.219 [2024-07-26 14:17:00.491916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102360 len:8 PRP1 0x0 PRP2 0x0 00:22:59.219 [2024-07-26 14:17:00.491928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.491946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.219 [2024-07-26 14:17:00.491958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.219 [2024-07-26 14:17:00.491969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102368 len:8 PRP1 0x0 PRP2 0x0 00:22:59.219 [2024-07-26 14:17:00.491981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.219 [2024-07-26 14:17:00.492000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.219 [2024-07-26 14:17:00.492012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.219 [2024-07-26 14:17:00.492023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102376 len:8 PRP1 0x0 PRP2 0x0 00:22:59.219 [2024-07-26 14:17:00.492036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.220 [2024-07-26 14:17:00.492049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.220 [2024-07-26 14:17:00.492060] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:59.220 [2024-07-26 14:17:00.492070-.492989] [... qpair teardown flush: 12 queued WRITE commands (sqid:1 cid:0 nsid:1, lba:102384 through lba:102472, len:8 each) and 7 queued READ commands (lba:101536 through lba:101584, len:8 each) were each completed manually with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the identical per-command nvme_qpair.c NOTICE/ERROR records are elided ...]
00:22:59.220 [2024-07-26 14:17:00.493048] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdb9b40 was disconnected and freed. reset controller.
00:22:59.220 [2024-07-26 14:17:00.493065] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:22:59.220 [2024-07-26 14:17:00.493099-.493223] [... 4 queued admin ASYNC EVENT REQUESTs (0c) (qid:0 cid:0-3) likewise completed with ABORTED - SQ DELETION (00/08); records elided ...]
00:22:59.220 [2024-07-26 14:17:00.493236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:59.220 [2024-07-26 14:17:00.493274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd890f0 (9): Bad file descriptor
00:22:59.220 [2024-07-26 14:17:00.496559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:59.220 [2024-07-26 14:17:00.575153] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
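This teardown flush is the expected pattern when a qpair is deleted out from under queued I/O: every outstanding command is completed manually with ABORTED - SQ DELETION, bdev_nvme frees the qpair, moves the trid (here from 10.0.0.2:4422 back to 4420), and resets the controller. A quick, hypothetical triage of such a capture (it assumes the bdevperf output was saved to try.txt, as this test does below):

  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  grep -c 'ABORTED - SQ DELETION' "$log"            # queued commands flushed at teardown
  grep -c 'Start failover from' "$log"              # trid transitions
  grep -c 'Resetting controller successful' "$log"  # resets that completed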
00:22:59.220 
00:22:59.220 Latency(us)
00:22:59.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:59.220 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:59.220 Verification LBA range: start 0x0 length 0x4000
00:22:59.220 NVMe0n1 : 15.01 8577.28 33.51 1194.82 0.00 13071.35 543.10 17185.00
00:22:59.220 ===================================================================================================================
00:22:59.220 Total : 8577.28 33.51 1194.82 0.00 13071.35 543.10 17185.00
00:22:59.220 Received shutdown signal, test time was about 15.000000 seconds
00:22:59.220 
00:22:59.220 Latency(us)
00:22:59.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:59.220 ===================================================================================================================
00:22:59.220 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:59.220 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:59.220 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:22:59.220 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:59.220 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=288835
00:22:59.220 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:59.220 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 288835 /var/tmp/bdevperf.sock
00:22:59.220 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 288835 ']'
00:22:59.220 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:59.220 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:59.220 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
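The pass criterion is literal: three forced failovers must leave exactly three 'Resetting controller successful' records in the capture (presumably the try.txt file dumped below), hence count=3 and the (( count != 3 )) guard falling through. The second bdevperf is then started in RPC-driven mode; a minimal sketch of that pattern, using the same paths as this job:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -z makes bdevperf sit idle until it is configured and driven over its RPC socket
  $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  # controllers are attached over that socket (see the traced steps below), e.g.
  #   rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # and the run itself is kicked off with
  #   bdevperf.py -s /var/tmp/bdevperf.sock perform_tests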
00:22:59.220 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.220 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:59.220 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.220 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:59.220 14:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:59.220 [2024-07-26 14:17:07.032350] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:59.220 14:17:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:59.477 [2024-07-26 14:17:07.297119] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:59.477 14:17:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:59.734 NVMe0n1 00:22:59.734 14:17:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:00.299 00:23:00.299 14:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:00.556 00:23:00.556 14:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.556 14:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:00.814 14:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:01.071 14:17:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:04.351 14:17:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.351 14:17:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:04.351 14:17:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=289508 00:23:04.351 14:17:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:04.351 14:17:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 289508 00:23:05.778 0 00:23:05.778 14:17:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:05.778 [2024-07-26 14:17:06.485148] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:23:05.778 [2024-07-26 14:17:06.485246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288835 ] 00:23:05.778 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.778 [2024-07-26 14:17:06.545273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.778 [2024-07-26 14:17:06.651568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.778 [2024-07-26 14:17:09.035501] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:05.778 [2024-07-26 14:17:09.035600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.778 [2024-07-26 14:17:09.035623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.778 [2024-07-26 14:17:09.035641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.778 [2024-07-26 14:17:09.035654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.778 [2024-07-26 14:17:09.035668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.778 [2024-07-26 14:17:09.035683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.778 [2024-07-26 14:17:09.035697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.778 [2024-07-26 14:17:09.035711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.778 [2024-07-26 14:17:09.035726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:05.778 [2024-07-26 14:17:09.035774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:05.778 [2024-07-26 14:17:09.035806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23460f0 (9): Bad file descriptor 00:23:05.778 [2024-07-26 14:17:09.082845] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:05.778 Running I/O for 1 seconds... 
00:23:05.778 
00:23:05.778 Latency(us)
00:23:05.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:05.778 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:05.778 Verification LBA range: start 0x0 length 0x4000
00:23:05.778 NVMe0n1 : 1.00 8808.17 34.41 0.00 0.00 14473.96 3106.89 13495.56
00:23:05.778 ===================================================================================================================
00:23:05.778 Total : 8808.17 34.41 0.00 0.00 14473.96 3106.89 13495.56
00:23:05.778 14:17:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:17:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
14:17:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:06.343 14:17:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:17:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
14:17:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:06.600 14:17:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:23:09.877 14:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
14:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 288835
14:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 288835 ']'
14:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 288835
14:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
14:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
14:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 288835
14:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
14:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
14:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 288835'
killing process with pid 288835
14:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 288835
14:17:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 288835
00:23:10.135 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:10.393 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:10.393 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:10.393 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:10.393 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:10.393 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:10.393 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:10.393 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:10.393 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:10.393 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:10.393 rmmod nvme_tcp 00:23:10.393 rmmod nvme_fabrics 00:23:10.393 rmmod nvme_keyring 00:23:10.393 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:10.652 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:10.652 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:10.652 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 286690 ']' 00:23:10.652 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 286690 00:23:10.652 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 286690 ']' 00:23:10.652 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 286690 00:23:10.652 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:10.652 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:10.652 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 286690 00:23:10.652 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:10.652 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:10.652 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 286690' 00:23:10.652 killing process with pid 286690 00:23:10.652 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 286690 00:23:10.652 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 286690 00:23:10.912 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:10.912 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:10.912 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:10.912 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.912 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.912 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.912 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.912 14:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.833 14:17:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:12.833 00:23:12.833 real 0m35.141s 00:23:12.833 user 2m3.828s 00:23:12.833 sys 0m5.909s 00:23:12.833 14:17:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:12.833 14:17:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:12.833 ************************************ 00:23:12.833 END TEST nvmf_failover 00:23:12.833 ************************************ 00:23:12.833 14:17:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:12.833 14:17:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:12.833 14:17:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:12.833 14:17:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.833 ************************************ 00:23:12.833 START TEST nvmf_host_discovery 00:23:12.833 ************************************ 00:23:12.833 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:13.092 * Looking for test storage... 00:23:13.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:13.092 14:17:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:13.092 14:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:14.993 14:17:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:14.993 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:14.993 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.993 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:14.994 Found net devices under 0000:09:00.0: cvl_0_0 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:14.994 Found net devices under 0000:09:00.1: cvl_0_1 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.994 14:17:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:15.252 14:17:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:15.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:23:15.252 00:23:15.252 --- 10.0.0.2 ping statistics --- 00:23:15.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.252 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:15.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:23:15.252 00:23:15.252 --- 10.0.0.1 ping statistics --- 00:23:15.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.252 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=292230 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 292230 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 292230 ']' 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
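The topology that nvmftestinit traced above reduces to: one port of the two-port NIC (0000:09:00.0, cvl_0_0) moves into a private namespace as the target at 10.0.0.2, while its sibling (0000:09:00.1, cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are ping-verified. Condensed from the trace, assuming the same device and namespace names:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator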
00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.252 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.252 [2024-07-26 14:17:23.169588] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:23:15.253 [2024-07-26 14:17:23.169660] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.253 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.253 [2024-07-26 14:17:23.231224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.510 [2024-07-26 14:17:23.338552] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.510 [2024-07-26 14:17:23.338604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.510 [2024-07-26 14:17:23.338628] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.510 [2024-07-26 14:17:23.338639] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.510 [2024-07-26 14:17:23.338650] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.510 [2024-07-26 14:17:23.338675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.510 [2024-07-26 14:17:23.484329] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:23:15.510 [2024-07-26 14:17:23.492498] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.510 null0 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.510 null1 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=292254 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 292254 /tmp/host.sock 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 292254 ']' 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:15.510 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.510 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.768 [2024-07-26 14:17:23.568280] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
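At this point the discovery test has two SPDK apps: an nvmf_tgt target inside the namespace (pid 292230, driven over the default /var/tmp/spdk.sock) and a second nvmf_tgt acting as the discovery host on /tmp/host.sock (pid 292254, launched just above). A sketch condensed from the trace above and the steps that follow, same paths as this job (rpc_cmd in the script is just rpc.py against the app's socket; the waitforlisten steps between launches are omitted):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target
  $spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $spdk/scripts/rpc.py bdev_null_create null0 1000 512
  $spdk/scripts/rpc.py bdev_null_create null1 1000 512
  $spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &                              # host app
  $spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test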
00:23:15.768 [2024-07-26 14:17:23.568366] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292254 ] 00:23:15.768 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.768 [2024-07-26 14:17:23.626334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.768 [2024-07-26 14:17:23.734158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.025 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.025 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.026 14:17:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.026 14:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.026 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:16.026 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:16.026 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.026 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.026 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.026 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.026 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.026 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.026 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:16.284 14:17:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.284 [2024-07-26 14:17:24.142209] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.284 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.285 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:16.285 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:16.285 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:16.285 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:16.285 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:16.285 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:16.285 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.285 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.285 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.285 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.285 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.285 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.285 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.542 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:23:16.542 14:17:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:23:17.107 [2024-07-26 14:17:24.871250] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:17.107 [2024-07-26 14:17:24.871278] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:17.107 [2024-07-26 14:17:24.871308] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:17.107 
[2024-07-26 14:17:24.960615] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:17.107 [2024-07-26 14:17:25.063018] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:17.107 [2024-07-26 14:17:25.063042] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.364 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.641 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:17.900 14:17:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.900 [2024-07-26 14:17:25.827344] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:17.900 [2024-07-26 14:17:25.827587] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:17.900 [2024-07-26 14:17:25.827638] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.900 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.158 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:18.158 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:18.158 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:18.158 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:18.158 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:18.158 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:18.159 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:18.159 14:17:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:18.159 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:18.159 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:18.159 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.159 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:18.159 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.159 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:18.159 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.159 [2024-07-26 14:17:25.954388] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:18.159 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:18.159 14:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:23:18.159 [2024-07-26 14:17:26.016961] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:18.159 [2024-07-26 14:17:26.016983] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:18.159 [2024-07-26 14:17:26.016992] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:19.092 14:17:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:19.092 14:17:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:19.092 14:17:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:19.092 14:17:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:19.092 14:17:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.092 14:17:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:19.092 14:17:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.092 14:17:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:19.092 14:17:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:19.092 14:17:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:19.092 14:17:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.092 [2024-07-26 14:17:27.051202] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:19.092 [2024-07-26 14:17:27.051231] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:19.092 [2024-07-26 14:17:27.053594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.092 [2024-07-26 14:17:27.053627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.092 [2024-07-26 14:17:27.053645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.092 [2024-07-26 14:17:27.053659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.092 [2024-07-26 14:17:27.053673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.092 [2024-07-26 14:17:27.053687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.092 [2024-07-26 14:17:27.053700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.092 [2024-07-26 14:17:27.053714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.092 [2024-07-26 14:17:27.053727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebfc20 is same with the state(5) to be set 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:19.092 [2024-07-26 14:17:27.063585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebfc20 (9): Bad file descriptor 00:23:19.092 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.092 [2024-07-26 14:17:27.073630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.092 [2024-07-26 14:17:27.073825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.092 [2024-07-26 14:17:27.073854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ebfc20 with addr=10.0.0.2, port=4420 00:23:19.092 [2024-07-26 14:17:27.073871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebfc20 is same with the state(5) to be set 00:23:19.092 [2024-07-26 14:17:27.073894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebfc20 (9): Bad file descriptor 00:23:19.092 [2024-07-26 14:17:27.073914] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.092 [2024-07-26 14:17:27.073928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.092 [2024-07-26 14:17:27.073943] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.092 [2024-07-26 14:17:27.073963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.092 [2024-07-26 14:17:27.083707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.092 [2024-07-26 14:17:27.083851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.092 [2024-07-26 14:17:27.083879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ebfc20 with addr=10.0.0.2, port=4420 00:23:19.092 [2024-07-26 14:17:27.083895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebfc20 is same with the state(5) to be set 00:23:19.092 [2024-07-26 14:17:27.083917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebfc20 (9): Bad file descriptor 00:23:19.092 [2024-07-26 14:17:27.083949] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.093 [2024-07-26 14:17:27.083966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.093 [2024-07-26 14:17:27.083980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.093 [2024-07-26 14:17:27.083999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.093 [2024-07-26 14:17:27.093780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.093 [2024-07-26 14:17:27.093972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.093 [2024-07-26 14:17:27.094001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ebfc20 with addr=10.0.0.2, port=4420 00:23:19.093 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.093 [2024-07-26 14:17:27.094016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebfc20 is same with the state(5) to be set 00:23:19.093 [2024-07-26 14:17:27.094039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebfc20 (9): Bad file descriptor 00:23:19.093 [2024-07-26 14:17:27.094059] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.093 [2024-07-26 14:17:27.094073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.093 [2024-07-26 14:17:27.094086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.093 [2024-07-26 14:17:27.094104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:19.093 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:19.093 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:19.093 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:19.093 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:19.093 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:19.093 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:19.093 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:19.093 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.093 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.093 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:19.093 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.093 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:19.093 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:19.093 [2024-07-26 14:17:27.103868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.093 [2024-07-26 14:17:27.104088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.093 [2024-07-26 14:17:27.104116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ebfc20 with addr=10.0.0.2, port=4420 00:23:19.093 [2024-07-26 14:17:27.104132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebfc20 is same with the state(5) to be set 00:23:19.093 [2024-07-26 14:17:27.104154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebfc20 (9): Bad file descriptor 00:23:19.093 [2024-07-26 14:17:27.104200] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.093 [2024-07-26 14:17:27.104219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.093 [2024-07-26 14:17:27.104232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.093 [2024-07-26 14:17:27.104251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:19.351 [2024-07-26 14:17:27.113966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.351 [2024-07-26 14:17:27.114130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.351 [2024-07-26 14:17:27.114158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ebfc20 with addr=10.0.0.2, port=4420 00:23:19.351 [2024-07-26 14:17:27.114174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebfc20 is same with the state(5) to be set 00:23:19.351 [2024-07-26 14:17:27.114196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebfc20 (9): Bad file descriptor 00:23:19.351 [2024-07-26 14:17:27.114216] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.351 [2024-07-26 14:17:27.114229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.351 [2024-07-26 14:17:27.114242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.351 [2024-07-26 14:17:27.114260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.351 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.351 [2024-07-26 14:17:27.124051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.351 [2024-07-26 14:17:27.124198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.351 [2024-07-26 14:17:27.124230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ebfc20 with addr=10.0.0.2, port=4420 00:23:19.351 [2024-07-26 14:17:27.124246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebfc20 is same with the state(5) to be set 00:23:19.351 [2024-07-26 14:17:27.124268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebfc20 (9): Bad file descriptor 00:23:19.351 [2024-07-26 14:17:27.124299] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.351 [2024-07-26 14:17:27.124315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.351 [2024-07-26 14:17:27.124328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.351 [2024-07-26 14:17:27.124347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:19.351 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:19.351 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:19.351 [2024-07-26 14:17:27.134134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.351 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:19.351 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:19.351 [2024-07-26 14:17:27.134336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.351 [2024-07-26 14:17:27.134364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ebfc20 with addr=10.0.0.2, port=4420 00:23:19.351 [2024-07-26 14:17:27.134380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebfc20 is same with the state(5) to be set 00:23:19.351 [2024-07-26 14:17:27.134401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebfc20 (9): Bad file descriptor 00:23:19.351 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:19.351 [2024-07-26 14:17:27.134422] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.351 [2024-07-26 14:17:27.134436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.351 [2024-07-26 14:17:27.134449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.351 [2024-07-26 14:17:27.134467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:19.352 [2024-07-26 14:17:27.136907] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:19.352 [2024-07-26 14:17:27.136945] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.352 14:17:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.724 [2024-07-26 14:17:28.390176] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:20.724 [2024-07-26 14:17:28.390197] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:20.724 [2024-07-26 14:17:28.390217] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:20.724 [2024-07-26 14:17:28.477506] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:20.724 [2024-07-26 14:17:28.585605] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:20.724 [2024-07-26 14:17:28.585639] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.724 request: 00:23:20.724 { 00:23:20.724 "name": "nvme", 00:23:20.724 "trtype": "tcp", 00:23:20.724 "traddr": "10.0.0.2", 00:23:20.724 "adrfam": "ipv4", 00:23:20.724 "trsvcid": "8009", 00:23:20.724 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:20.724 "wait_for_attach": true, 00:23:20.724 "method": "bdev_nvme_start_discovery", 00:23:20.724 "req_id": 1 00:23:20.724 } 00:23:20.724 Got JSON-RPC error response 00:23:20.724 response: 00:23:20.724 { 00:23:20.724 "code": -17, 00:23:20.724 "message": "File exists" 00:23:20.724 } 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.724 request: 00:23:20.724 { 00:23:20.724 "name": "nvme_second", 00:23:20.724 "trtype": "tcp", 00:23:20.724 "traddr": "10.0.0.2", 00:23:20.724 "adrfam": "ipv4", 00:23:20.724 "trsvcid": "8009", 00:23:20.724 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:20.724 "wait_for_attach": true, 00:23:20.724 "method": "bdev_nvme_start_discovery", 00:23:20.724 "req_id": 1 00:23:20.724 } 00:23:20.724 Got JSON-RPC error response 00:23:20.724 response: 00:23:20.724 { 00:23:20.724 "code": -17, 00:23:20.724 "message": "File exists" 00:23:20.724 } 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:20.724 14:17:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:20.724 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:20.983 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.983 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:20.983 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:20.983 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:20.983 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:20.983 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:20.983 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.983 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:20.983 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.983 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:20.983 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.983 14:17:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.915 [2024-07-26 14:17:29.772980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.915 [2024-07-26 14:17:29.773036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec3030 with addr=10.0.0.2, port=8010 00:23:21.915 [2024-07-26 14:17:29.773061] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:21.915 [2024-07-26 14:17:29.773074] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:21.915 [2024-07-26 14:17:29.773087] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:22.847 [2024-07-26 14:17:30.775533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.847 [2024-07-26 14:17:30.775624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec3030 with addr=10.0.0.2, port=8010 00:23:22.847 [2024-07-26 14:17:30.775656] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:22.847 [2024-07-26 14:17:30.775671] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:23:22.847 [2024-07-26 14:17:30.775684] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:23.779 [2024-07-26 14:17:31.777675] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:23.779 request: 00:23:23.779 { 00:23:23.779 "name": "nvme_second", 00:23:23.779 "trtype": "tcp", 00:23:23.779 "traddr": "10.0.0.2", 00:23:23.779 "adrfam": "ipv4", 00:23:23.779 "trsvcid": "8010", 00:23:23.779 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:23.779 "wait_for_attach": false, 00:23:23.779 "attach_timeout_ms": 3000, 00:23:23.779 "method": "bdev_nvme_start_discovery", 00:23:23.779 "req_id": 1 00:23:23.779 } 00:23:23.779 Got JSON-RPC error response 00:23:23.779 response: 00:23:23.779 { 00:23:23.779 "code": -110, 00:23:23.779 "message": "Connection timed out" 00:23:23.779 } 00:23:23.779 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:23.780 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:23.780 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:23.780 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:23.780 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:23.780 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:23.780 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:23.780 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:23.780 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.780 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:23.780 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.780 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:23.780 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 292254 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:24.038 rmmod nvme_tcp 00:23:24.038 rmmod nvme_fabrics 00:23:24.038 rmmod nvme_keyring 00:23:24.038 14:17:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 292230 ']' 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 292230 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 292230 ']' 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 292230 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 292230 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 292230' 00:23:24.038 killing process with pid 292230 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 292230 00:23:24.038 14:17:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 292230 00:23:24.297 14:17:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:24.298 14:17:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:24.298 14:17:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:24.298 14:17:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.298 14:17:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:24.298 14:17:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.298 14:17:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.298 14:17:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.205 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:26.205 00:23:26.205 real 0m13.392s 00:23:26.205 user 0m19.232s 00:23:26.205 sys 0m2.920s 00:23:26.205 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:26.205 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.205 ************************************ 00:23:26.205 END TEST nvmf_host_discovery 00:23:26.205 ************************************ 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:26.464 
14:17:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.464 ************************************ 00:23:26.464 START TEST nvmf_host_multipath_status 00:23:26.464 ************************************ 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:26.464 * Looking for test storage... 00:23:26.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.464 
14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.464 
14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:26.464 14:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:28.367 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:28.367 14:17:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:28.367 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:28.367 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:28.368 Found net devices under 0000:09:00.0: cvl_0_0 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:28.368 14:17:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:28.368 Found net devices under 0000:09:00.1: cvl_0_1 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.368 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.626 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.626 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.626 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:28.626 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.626 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.626 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.626 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:28.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:23:28.627 00:23:28.627 --- 10.0.0.2 ping statistics --- 00:23:28.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.627 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:28.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:23:28.627 00:23:28.627 --- 10.0.0.1 ping statistics --- 00:23:28.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.627 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=295334 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 295334 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 295334 ']' 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:28.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.627 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:28.627 [2024-07-26 14:17:36.558103] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:23:28.627 [2024-07-26 14:17:36.558179] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.627 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.627 [2024-07-26 14:17:36.622036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:28.885 [2024-07-26 14:17:36.734416] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.885 [2024-07-26 14:17:36.734466] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.885 [2024-07-26 14:17:36.734494] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.885 [2024-07-26 14:17:36.734506] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.885 [2024-07-26 14:17:36.734516] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.885 [2024-07-26 14:17:36.734595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.885 [2024-07-26 14:17:36.734601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.885 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.885 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:28.885 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.885 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:28.885 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:28.885 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.885 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=295334 00:23:28.885 14:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:29.142 [2024-07-26 14:17:37.118745] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.142 14:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:29.402 Malloc0 00:23:29.402 14:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:29.967 14:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:29.967 14:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.225 [2024-07-26 14:17:38.145797] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.225 14:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:30.483 [2024-07-26 14:17:38.402607] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:30.483 14:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=295571 00:23:30.483 14:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:30.483 14:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.483 14:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 295571 /var/tmp/bdevperf.sock 00:23:30.483 14:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 295571 ']' 00:23:30.483 14:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.483 14:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:30.483 14:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:30.483 14:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.483 14:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:30.741 14:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.741 14:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:30.741 14:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:30.999 14:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:31.564 Nvme0n1 00:23:31.564 14:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:31.822 Nvme0n1 00:23:31.822 14:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:31.822 14:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:34.351 14:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:34.351 14:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:34.351 14:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:34.609 14:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:35.541 14:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:35.541 14:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:35.541 14:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.541 14:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:35.800 14:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.800 14:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:35.800 14:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.800 14:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:36.058 14:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:36.058 14:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:36.058 14:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.058 14:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:36.315 14:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.315 14:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:36.315 14:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.315 14:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:36.574 14:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.574 14:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:36.574 14:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.574 14:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:36.832 14:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.832 14:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:36.832 14:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.832 14:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:37.090 14:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.090 14:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:37.090 14:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:37.348 14:17:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:37.606 14:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:38.539 14:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:38.539 14:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:38.539 14:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.539 14:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:38.796 14:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:38.796 14:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:38.796 14:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.796 14:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:39.054 14:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.054 14:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:39.054 14:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.054 14:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:39.311 14:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.311 14:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:39.311 14:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.311 14:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:39.569 14:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.569 14:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:39.569 14:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.569 14:17:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:39.826 14:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.826 14:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:39.826 14:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.826 14:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:40.120 14:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.120 14:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:40.120 14:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:40.417 14:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:40.417 14:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:41.406 14:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:41.406 14:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:41.406 14:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.406 14:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:41.664 14:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.664 14:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:41.664 14:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.664 14:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:41.922 14:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:41.922 14:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:41.922 14:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.922 14:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:42.180 14:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.180 14:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:42.180 14:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.180 14:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:42.437 14:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.437 14:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:42.437 14:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.437 14:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:42.695 14:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.695 14:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:42.695 14:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.695 14:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:42.953 14:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.953 14:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:42.953 14:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:43.210 14:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:43.468 14:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:44.841 14:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:44.841 14:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:44.841 14:17:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.841 14:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:44.841 14:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.841 14:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:44.841 14:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.841 14:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:45.099 14:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.099 14:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:45.099 14:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.099 14:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:45.356 14:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.356 14:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:45.356 14:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.356 14:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:45.614 14:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.614 14:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:45.614 14:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.615 14:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:45.872 14:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.872 14:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:45.872 14:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.872 14:17:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:46.130 14:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:46.130 14:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:46.130 14:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:46.388 14:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:46.646 14:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:47.578 14:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:47.578 14:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:47.578 14:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.578 14:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:47.836 14:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.836 14:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:47.836 14:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.836 14:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:48.093 14:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:48.094 14:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:48.094 14:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.094 14:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:48.351 14:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.351 14:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:48.351 14:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.351 14:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:48.607 14:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.607 14:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:48.607 14:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.607 14:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:48.863 14:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:48.863 14:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:48.863 14:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.864 14:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:49.120 14:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:49.120 14:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:49.120 14:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:49.378 14:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:49.635 14:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:50.567 14:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:50.567 14:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:50.567 14:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.567 14:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:50.824 14:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:50.824 14:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:50.824 14:17:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.824 14:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:51.083 14:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.083 14:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:51.083 14:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.083 14:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:51.340 14:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.340 14:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:51.340 14:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.340 14:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:51.598 14:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.598 14:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:51.598 14:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.598 14:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:51.855 14:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:51.855 14:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:51.855 14:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.855 14:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:52.112 14:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.112 14:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:52.369 14:18:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:23:52.369 14:18:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:52.627 14:18:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:52.884 14:18:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:53.817 14:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:53.817 14:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:53.817 14:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.817 14:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:54.074 14:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.074 14:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:54.074 14:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.074 14:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:54.332 14:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.332 14:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:54.332 14:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.332 14:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:54.590 14:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.590 14:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:54.590 14:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:54.590 14:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.848 14:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.848 14:18:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:54.848 14:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.848 14:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:55.105 14:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.105 14:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:55.105 14:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.105 14:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:55.364 14:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.364 14:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:55.364 14:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:55.622 14:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:55.880 14:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:57.252 14:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:57.252 14:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:57.252 14:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.252 14:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:57.252 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:57.252 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:57.252 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.252 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:57.511 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:57.511 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:57.511 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:57.511 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:57.769 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:57.769 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:57.769 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:57.769 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:58.027 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:58.027 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:58.027 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:58.027 14:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:58.286 14:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:58.286 14:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:58.286 14:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:58.286 14:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:58.544 14:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:58.544 14:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:23:58.544 14:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:58.803 14:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:23:59.061 14:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
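The trace above keeps repeating one pattern: set_ANA_state retags the ANA state on both target listeners, then each check_status/port_status round polls bdevperf over its RPC socket and filters the io_paths JSON with jq. A minimal bash sketch of those helpers, reconstructed only from the commands visible in this trace (the helper bodies and the rpc_py/bdevperf_sock/nqn variable names are assumptions, not the verbatim multipath_status.sh source):

    #!/usr/bin/env bash
    # Sketch reconstructed from the xtrace above, not the shipped script.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # set_ANA_state <state-for-4420> <state-for-4421>: retag both listeners
    # (optimized / non_optimized / inaccessible), as at sh@59 and sh@60 above.
    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # port_status <port> <field> <expected>: assert one field (current,
    # connected, or accessible) of the io_path bdevperf reports for that
    # listener port, as at sh@64 above.
    port_status() {
        local status
        status=$($rpc_py -s $bdevperf_sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

For example, the records just above correspond to port_status 4421 accessible true, then set_ANA_state non_optimized non_optimized, followed by a one-second sleep so the host can observe the new states before the next check_status round.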
00:23:59.993 14:18:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:59.993 14:18:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:59.993 14:18:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.993 14:18:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:00.251 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.251 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:00.251 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.251 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:00.509 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.509 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:00.509 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.509 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:00.767 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.767 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:00.767 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.767 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:01.025 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.025 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:01.025 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.025 14:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:01.283 14:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.283 14:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:01.283 14:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.283 14:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:01.540 14:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.540 14:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:01.540 14:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:01.799 14:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:02.056 14:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:03.430 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:03.430 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:03.430 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.430 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.430 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.430 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:03.430 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.430 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:03.688 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.688 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:03.688 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.688 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:03.946 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]]
00:24:03.946 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:03.946 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:03.946 14:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:04.204 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:04.204 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:04.204 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:04.204 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:04.462 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:04.462 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:24:04.462 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:04.462 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:04.720 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:04.720 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 295571
00:24:04.720 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 295571 ']'
00:24:04.720 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 295571
00:24:04.720 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:24:04.720 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:04.720 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 295571
00:24:04.720 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:24:04.720 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:24:04.720 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 295571'
00:24:04.720 killing process with pid 295571
00:24:04.720 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 295571
00:24:04.720 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 295571
00:24:04.720 Connection closed with partial response:
00:24:04.720
00:24:04.720
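The teardown just traced is autotest_common.sh's killprocess guarding the kill of the bdevperf reactor (pid 295571); bdevperf reports "Connection closed with partial response" because I/O is still in flight when the process goes down. A condensed bash sketch of the guard sequence as it appears in the trace (a reconstruction keyed to the @nnn markers above; the real helper has more branches):

    # killprocess <pid>, as exercised at common/autotest_common.sh@950-@974 above.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                            # @950: pid is required
        kill -0 "$pid" || return 0                           # @954: already gone?
        if [ "$(uname)" = Linux ]; then                      # @955
            process_name=$(ps --no-headers -o comm= "$pid")  # @956: reactor_2 here
        fi
        [ "$process_name" = sudo ] && return 1               # @960: never kill sudo
        echo "killing process with pid $pid"                 # @968
        kill "$pid"                                          # @969
        wait "$pid"                                          # @974: reap; the host side
                                                             # sees the TCP drop mid-I/O
    }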
00:24:05.005 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 295571
00:24:05.005 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:05.005 [2024-07-26 14:17:38.462877] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization...
00:24:05.005 [2024-07-26 14:17:38.462949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid295571 ]
00:24:05.005 EAL: No free 2048 kB hugepages reported on node 1
00:24:05.005 [2024-07-26 14:17:38.522350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:05.005 [2024-07-26 14:17:38.638048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:05.005 Running I/O for 90 seconds...
00:24:05.005 [2024-07-26 14:17:54.173868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:05.005 [2024-07-26 14:17:54.173928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:24:05.005 [2024-07-26 14:17:54.173966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:05.005 [2024-07-26 14:17:54.173984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:05.005 [2024-07-26 14:17:54.174009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:05.005 [2024-07-26 14:17:54.174025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:05.005 [2024-07-26 14:17:54.174047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:05.005 [2024-07-26 14:17:54.174063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:05.005 [2024-07-26 14:17:54.174084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:05.005 [2024-07-26 14:17:54.174100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:05.005 [2024-07-26 14:17:54.174122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:05.005 [2024-07-26 14:17:54.174137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:05.005 [2024-07-26 14:17:54.174160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:05.005 [2024-07-26 14:17:54.174175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.005
[2024-07-26 14:17:54.174197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.005 [2024-07-26 14:17:54.174213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.005 [2024-07-26 14:17:54.174234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.005 [2024-07-26 14:17:54.174250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.005 [2024-07-26 14:17:54.174272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.005 [2024-07-26 14:17:54.174287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.005 [2024-07-26 14:17:54.174333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.005 [2024-07-26 14:17:54.174356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.005 [2024-07-26 14:17:54.174378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.005 [2024-07-26 14:17:54.174393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.005 [2024-07-26 14:17:54.174414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.005 [2024-07-26 14:17:54.174429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.005 [2024-07-26 14:17:54.174449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.005 [2024-07-26 14:17:54.174478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.005 [2024-07-26 14:17:54.174501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.005 [2024-07-26 14:17:54.174516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.005 [2024-07-26 14:17:54.174562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.005 [2024-07-26 14:17:54.174588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.005 [2024-07-26 14:17:54.174610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.005 [2024-07-26 14:17:54.174625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.005 [2024-07-26 14:17:54.174647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[... several hundred near-identical log entries elided: alternating nvme_qpair.c: 243:nvme_io_qpair_print_command and nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* pairs from [2024-07-26 14:17:54.174647] through [2024-07-26 14:17:54.184788], covering WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) commands on sqid:1, lba range 94136-95152, len:8; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) with cdw0:0 p:0 m:0 dnr:0, and only the cid, lba, and sqhd values differ between entries ...]
00:24:05.011 [2024-07-26 14:17:54.184788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:05.011 [2024-07-26 14:17:54.184803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.184840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.011 [2024-07-26 14:17:54.184855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.184877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.011 [2024-07-26 14:17:54.184892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.184912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.011 [2024-07-26 14:17:54.184927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.184947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.011 [2024-07-26 14:17:54.184962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.184982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.011 [2024-07-26 14:17:54.184997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.011 [2024-07-26 14:17:54.185032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.011 [2024-07-26 14:17:54.185067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.011 [2024-07-26 14:17:54.185108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.185143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 
lba:94256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.185179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.185215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.185250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.185286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.185321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.185357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.185392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.185427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.185463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.185498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.185564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.185586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.185602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.186357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.186380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.186407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.186427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.186451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.011 [2024-07-26 14:17:54.186467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.186490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.186506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.186536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.186554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.186577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.186593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.186634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.186651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.011 [2024-07-26 14:17:54.186673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.186689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:24:05.011 [2024-07-26 14:17:54.186710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.011 [2024-07-26 14:17:54.186726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.186748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.012 [2024-07-26 14:17:54.186764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.186786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.012 [2024-07-26 14:17:54.186806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.186844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.186860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.186882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.186906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.186945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.186961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.186982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.186998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.012 [2024-07-26 14:17:54.187074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.012 [2024-07-26 14:17:54.187112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.012 [2024-07-26 14:17:54.187149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.012 [2024-07-26 14:17:54.187186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.012 [2024-07-26 14:17:54.187224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.012 [2024-07-26 14:17:54.187261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.012 [2024-07-26 14:17:54.187303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.012 [2024-07-26 14:17:54.187898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.187972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.187994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.188010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.188031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.188047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.188068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.188083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.188104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.188120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.188141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.188157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.188178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.188193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.188214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.012 [2024-07-26 14:17:54.188229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.012 [2024-07-26 14:17:54.188255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.188271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.188293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.188308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.188329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.188345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.188366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.188382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.188417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.188433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.188454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.188469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.188490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.188504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.188525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.188564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.188589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.188605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.189246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.189291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.189328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.189370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.189408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.189444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.189481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.189517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.189565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.189603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.189639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.013 
[2024-07-26 14:17:54.189660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.189676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.189714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.189751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.189787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.013 [2024-07-26 14:17:54.189833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.013 [2024-07-26 14:17:54.189871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.013 [2024-07-26 14:17:54.189908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.013 [2024-07-26 14:17:54.189962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.189982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.013 [2024-07-26 14:17:54.190013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.190035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.013 [2024-07-26 14:17:54.190050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.190072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.013 [2024-07-26 14:17:54.190087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.190108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.190123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.190145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.190160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.190190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.190207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.190228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.190244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.190265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.190280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.013 [2024-07-26 14:17:54.190302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.013 [2024-07-26 14:17:54.190317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.014 [2024-07-26 14:17:54.190834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.190968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.190983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.191005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.191020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.191042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.191057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.191079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.191094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.191116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.191131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.191152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.014 [2024-07-26 14:17:54.191168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.014 [2024-07-26 14:17:54.191189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:05.014 [2024-07-26 14:17:54.191205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:05.014 [2024-07-26 14:17:54.191226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:05.014 [2024-07-26 14:17:54.191241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.014 [2024-07-26 14:17:54.191540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:05.015 [2024-07-26 14:17:54.191558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
[... the same *NOTICE* command/completion pair repeats for every outstanding I/O on qid:1: WRITE (lba 94432-95152, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (lba 94136-94424, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 14:17:54.191263 through 14:17:54.208572 ...]
00:24:05.020 [2024-07-26 14:17:54.208594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:05.020 [2024-07-26 14:17:54.208610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE
(03/02) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.208632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.208648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.208669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.208685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.208706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.208721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.208744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.208764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.208787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.208802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.208850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.208866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.208888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.208917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.208938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.208952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.208972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.208986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.209020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.209054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.209107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.209144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.209181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.020 [2024-07-26 14:17:54.209218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.020 [2024-07-26 14:17:54.209254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.020 [2024-07-26 14:17:54.209295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.020 [2024-07-26 14:17:54.209332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.020 [2024-07-26 14:17:54.209384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:05.020 [2024-07-26 14:17:54.209421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.020 [2024-07-26 14:17:54.209472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.209538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.209580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.209616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.209653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.020 [2024-07-26 14:17:54.209690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.020 [2024-07-26 14:17:54.209711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.209726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.209748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.209763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.209789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.209805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.209841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.209856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.209876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.209891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.209910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.209924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.209945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.209959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.209979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.209993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:24:05.021 [2024-07-26 14:17:54.210651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.021 [2024-07-26 14:17:54.210976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.210997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.021 [2024-07-26 14:17:54.211011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.211031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.021 [2024-07-26 14:17:54.211046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.211066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.021 [2024-07-26 14:17:54.211080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.211100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.021 [2024-07-26 14:17:54.211114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.211134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.021 [2024-07-26 14:17:54.211148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.211168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.021 [2024-07-26 14:17:54.211182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.021 [2024-07-26 14:17:54.211202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.021 [2024-07-26 14:17:54.211219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.211240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.211255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.211274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.211289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.211309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.211323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.211343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.211357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.211378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.211392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.211412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.211427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.211447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.211462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.212330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.022 [2024-07-26 14:17:54.212374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.212411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.212448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.212484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.212534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:05.022 [2024-07-26 14:17:54.212574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.212611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.212648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.212685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.022 [2024-07-26 14:17:54.212722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.022 [2024-07-26 14:17:54.212759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.022 [2024-07-26 14:17:54.212796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.022 [2024-07-26 14:17:54.212858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.022 [2024-07-26 14:17:54.212909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.212945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.212965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.212980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.213004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.213019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.213056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.213072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.213093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.213108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.213130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.213145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.213167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.022 [2024-07-26 14:17:54.213182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.213203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.022 [2024-07-26 14:17:54.213218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.213239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.022 [2024-07-26 14:17:54.213255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.213276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.022 [2024-07-26 14:17:54.213291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.213313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.022 [2024-07-26 14:17:54.213344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.213367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.022 [2024-07-26 14:17:54.213382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.213419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.022 [2024-07-26 14:17:54.213436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.213457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.022 [2024-07-26 14:17:54.213473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.213494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.022 [2024-07-26 14:17:54.213513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.213543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.022 [2024-07-26 14:17:54.213562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.022 [2024-07-26 14:17:54.213584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.213600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.213621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.213636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.213657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.213673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.213694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.213709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.213731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.213746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.023 
[2024-07-26 14:17:54.213768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.213783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.213804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.213825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.213847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.213863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.213884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.213900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.213921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.213951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.213972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.213990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.214028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.214044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.214065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.214081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.214102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.214117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.214139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.214154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.214176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.214191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.214212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.214228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.214249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.214264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.214285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.214315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.214337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.214352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.214389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.214404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.214959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.214982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.215009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.215026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.215053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.215069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.215090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.215106] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.023 [2024-07-26 14:17:54.215128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.023 [2024-07-26 14:17:54.215144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:24:05.024 [2024-07-26 14:17:54.215697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.024 [2024-07-26 14:17:54.215713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command WRITE/READ notices on qid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), omitted; the pattern continues through 2024-07-26 14:17:54.224 ...]
00:24:05.029 [2024-07-26 14:17:54.224220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:24:05.029 [2024-07-26 14:17:54.224235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.224255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.224270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.225220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:28 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.225598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.225634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.225670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.225705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.225741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.225966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.225981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.226002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.029 [2024-07-26 14:17:54.226017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.226038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.226053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.226091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.226106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.226126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.226157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.226179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.226194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:24:05.029 [2024-07-26 14:17:54.226215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.226230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.226251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.226267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.226288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.226303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.226325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.226340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.226361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.226380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.029 [2024-07-26 14:17:54.226402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-07-26 14:17:54.226418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.226439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.226455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.226476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.226491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.226512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.226534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.226557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.226573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.226595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.226610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.226631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.226647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.226668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.226683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.226704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.226719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.226740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.226756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.226777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.226792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.226813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.226829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.226854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.226870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.226892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.226907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.226929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.226944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.226965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.226980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.233216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.233245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.233269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.233284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.233305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.233320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.233605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.233630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.233677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.233697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.233724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.233740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.233765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.233782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.233807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.233822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.233854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.030 [2024-07-26 14:17:54.233871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.233898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.233914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.233939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.233954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.233980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.234011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.234038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.234053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.234093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.234108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.234131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.234146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.234170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.234184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.234208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.234223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.234247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.234261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.234285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.234299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.234323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.234338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.234361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.234380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.234404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.234419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.234443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.030 [2024-07-26 14:17:54.234458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.030 [2024-07-26 14:17:54.234482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.031 [2024-07-26 14:17:54.234496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.234520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.031 [2024-07-26 14:17:54.234559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.234587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.031 [2024-07-26 14:17:54.234603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.234628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.031 [2024-07-26 14:17:54.234643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.234667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.031 [2024-07-26 14:17:54.234682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.234708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.031 [2024-07-26 14:17:54.234723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.234748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.031 [2024-07-26 14:17:54.234763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.234788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.234802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.234827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.234857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.234881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.234900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.234924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.234939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.234963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.234977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:24:05.031 [2024-07-26 14:17:54.235116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.235963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.235987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.236001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.236025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.236040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.236064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.236078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.031 [2024-07-26 14:17:54.236102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.031 [2024-07-26 14:17:54.236117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.032 [2024-07-26 14:17:54.236155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.032 [2024-07-26 14:17:54.236193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.032 [2024-07-26 14:17:54.236233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.032 [2024-07-26 14:17:54.236271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.032 [2024-07-26 14:17:54.236310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:05.032 [2024-07-26 14:17:54.236348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.032 [2024-07-26 14:17:54.236387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.032 [2024-07-26 14:17:54.236429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.032 [2024-07-26 14:17:54.236469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.032 [2024-07-26 14:17:54.236522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.032 [2024-07-26 14:17:54.236574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.032 [2024-07-26 14:17:54.236614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.032 [2024-07-26 14:17:54.236654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.032 [2024-07-26 14:17:54.236694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.032 [2024-07-26 14:17:54.236733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.032 [2024-07-26 14:17:54.236758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
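The "(03/02)" printed with every completion above is the NVMe status field shown as status code type / status code: SCT 0x3 is Path Related Status and SC 0x02 is Asymmetric Access Inaccessible, i.e. the controller is failing the I/O because the namespace's ANA group is currently in the Inaccessible state, consistent with a test that transitions the namespace's ANA state between paths. As a minimal, self-contained sketch of how those fields are unpacked (this is not SPDK's internal struct; the bit positions come from dword 3 of the NVMe completion queue entry, bit 16 = phase tag, bits 31:17 = status field):

#include <stdint.h>
#include <stdio.h>

/* Decoded NVMe completion status, matching the fields the log prints. */
struct nvme_status {
    unsigned sc;   /* status code */
    unsigned sct;  /* status code type */
    unsigned crd;  /* command retry delay (NVMe 1.4+; reserved earlier) */
    unsigned m;    /* more information available in a log page */
    unsigned dnr;  /* do not retry */
    unsigned p;    /* phase tag */
};

/* dw3 is the raw third dword of a completion queue entry. */
static struct nvme_status decode_status(uint32_t dw3)
{
    struct nvme_status s;
    s.p   = (dw3 >> 16) & 0x1;
    s.sc  = (dw3 >> 17) & 0xff;
    s.sct = (dw3 >> 25) & 0x7;
    s.crd = (dw3 >> 28) & 0x3;
    s.m   = (dw3 >> 30) & 0x1;
    s.dnr = (dw3 >> 31) & 0x1;
    return s;
}

int main(void)
{
    /* SCT 0x3 / SC 0x02: "ASYMMETRIC ACCESS INACCESSIBLE (03/02)". */
    uint32_t dw3 = (0x3u << 25) | (0x02u << 17);
    struct nvme_status s = decode_status(dw3);
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}

Compiled standalone this prints "(03/02) p:0 m:0 dnr:0", matching the completions logged above; dnr:0 in particular means the command is allowed to be retried once another path becomes accessible.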
00:24:05.032 [2024-07-26 14:18:09.994890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:05.032 [2024-07-26 14:18:09.994967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided (2024-07-26 14:18:09.995010 through 14:18:09.999091): WRITE and READ commands on qid:1, lba 121576-122384, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:24:05.033 [2024-07-26 14:18:09.999112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:122392 len:8 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:24:05.033 [2024-07-26 14:18:09.999128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.033 [2024-07-26 14:18:09.999150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:122408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.033 [2024-07-26 14:18:09.999166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.033 [2024-07-26 14:18:09.999187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:122424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.033 [2024-07-26 14:18:09.999203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.033 [2024-07-26 14:18:09.999224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.033 [2024-07-26 14:18:09.999240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.033 [2024-07-26 14:18:09.999262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.033 [2024-07-26 14:18:09.999286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.033 [2024-07-26 14:18:09.999309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.033 [2024-07-26 14:18:09.999325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.033 [2024-07-26 14:18:09.999347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.033 [2024-07-26 14:18:09.999362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:09.999384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:09.999399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:09.999421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:09.999437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:09.999458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:09.999474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:09.999495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:107 nsid:1 lba:122496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:09.999510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:09.999539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:09.999556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:09.999578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:09.999594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:09.999615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:09.999631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:09.999652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:09.999667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:09.999689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:09.999704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:09.999726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:09.999745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000179] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 
sqhd:004f p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:10.000657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:122616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:10.000694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:10.000843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.034 [2024-07-26 14:18:10.000879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:10.000916] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:10.000953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.000979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:10.000996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.001018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:10.001034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.001055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:10.001071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.001092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.034 [2024-07-26 14:18:10.001108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.034 [2024-07-26 14:18:10.001130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.001145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.001167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:122032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.001183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.001205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.001221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.001243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:122096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.001258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.001280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.035 [2024-07-26 
14:18:10.001296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.001319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.035 [2024-07-26 14:18:10.001335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.001357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:122144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.001373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.001396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.001419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.004049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:122240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.004096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:122272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.004134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.004172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.004210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.035 [2024-07-26 14:18:10.004248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121640 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.035 [2024-07-26 14:18:10.004286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.035 [2024-07-26 14:18:10.004323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.035 [2024-07-26 14:18:10.004376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.035 [2024-07-26 14:18:10.004449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.035 [2024-07-26 14:18:10.004504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.035 [2024-07-26 14:18:10.004568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.035 [2024-07-26 14:18:10.004637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.004701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.004753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.004813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.004874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.004925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.004955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.035 [2024-07-26 14:18:10.004975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.005004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.005025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.005054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.005075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.005105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.005127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.005157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.005179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.005209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.035 [2024-07-26 14:18:10.005230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.005260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.005287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.005318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.005340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:05.035 [2024-07-26 14:18:10.005369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.005391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.005421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-07-26 14:18:10.005443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.006027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.035 [2024-07-26 14:18:10.006058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.006097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.035 [2024-07-26 14:18:10.006120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.035 [2024-07-26 14:18:10.006150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.006171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.006201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.006221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.006250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.006271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.006301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.006321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.006352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.006374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.006405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-07-26 14:18:10.006427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.006457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-07-26 14:18:10.006478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.006514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-07-26 14:18:10.006549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.006592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-07-26 14:18:10.006614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.006645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-07-26 14:18:10.006665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.006696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-07-26 14:18:10.006716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.006746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-07-26 14:18:10.006767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.006797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-07-26 14:18:10.006818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.006848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-07-26 14:18:10.006869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.006898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:122856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-07-26 14:18:10.006918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.006951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.006971] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.007021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.007089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.007145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.007218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.007285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.007349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.007426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.007493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.007573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122216 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.007631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-07-26 14:18:10.007681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.007731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.007783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-07-26 14:18:10.007839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-07-26 14:18:10.007898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.007939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-07-26 14:18:10.007964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.008002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-07-26 14:18:10.008025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.008068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:122032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-07-26 14:18:10.008104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.036 [2024-07-26 14:18:10.008141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-07-26 14:18:10.008171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.008202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:121664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.037 [2024-07-26 14:18:10.008225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.008259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-07-26 14:18:10.008285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.037 [2024-07-26 14:18:10.009364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.037 [2024-07-26 14:18:10.009410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.037 [2024-07-26 14:18:10.009450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.037 [2024-07-26 14:18:10.009488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.037 [2024-07-26 14:18:10.009526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-07-26 14:18:10.009573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-07-26 14:18:10.009616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-07-26 14:18:10.009655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 
14:18:10.009677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-07-26 14:18:10.009693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-07-26 14:18:10.009730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-07-26 14:18:10.009768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.037 [2024-07-26 14:18:10.009805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.037 [2024-07-26 14:18:10.009842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.037 [2024-07-26 14:18:10.009879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.037 [2024-07-26 14:18:10.009916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-07-26 14:18:10.009953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.009975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-07-26 14:18:10.009990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.037 [2024-07-26 14:18:10.010012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-07-26 14:18:10.010027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
00:24:05.037 [... several hundred similar NOTICE pairs omitted: READ and WRITE commands on sqid:1 (len:8, lba ~121576-123640) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0, 2024-07-26 14:18:10.009715 through 14:18:10.027574 ...]
00:24:05.042 [2024-07-26 14:18:10.027600] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.042 [2024-07-26 14:18:10.027616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.042 [2024-07-26 14:18:10.027638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.042 [2024-07-26 14:18:10.027653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.042 [2024-07-26 14:18:10.027675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.042 [2024-07-26 14:18:10.027691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.042 [2024-07-26 14:18:10.028246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-07-26 14:18:10.028270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.042 [2024-07-26 14:18:10.028296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-07-26 14:18:10.028313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.042 [2024-07-26 14:18:10.028335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-07-26 14:18:10.028351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.042 [2024-07-26 14:18:10.028373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.028388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.028410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.028426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.028447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.028463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.028484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.028500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 
dnr:0 00:24:05.043 [2024-07-26 14:18:10.028522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.028547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.028571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.028587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.028613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.028629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.028651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.028667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.028688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.028704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.028725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.028741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.028762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.028777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.028799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.028814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.028836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.028851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.028872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:123680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.028888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.028909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.028925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.028947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.028962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.030458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.030483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.030510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.030535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.030561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:123712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.030582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.030604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.030620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.030642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.030658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.030679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.030694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.030716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.030732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.030753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.030769] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.030790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.030806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.030827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:123048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.030843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.030864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.030879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.030901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:123592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.030916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.030938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.030953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.030974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.030989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.031011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.031034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.031056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.031072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.031093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.031109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.031130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.031146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.031167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.031183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.031204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.031220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.031241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.031256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.031277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-07-26 14:18:10.031293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.043 [2024-07-26 14:18:10.031315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.043 [2024-07-26 14:18:10.031330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.031352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.031367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.031389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.031404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.031425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.031441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.031462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:123000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.031478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.031504] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.031520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.031550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.031567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.031589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:123696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.031605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.034465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.034491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.034519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.034545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.034570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:123784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.034586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.034609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.034625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.034646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:123816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.034661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.034683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:123832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.034698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.034720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.034735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 
14:18:10.034756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:123864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.034772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.034793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:123880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.034809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.034836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.034852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.034873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.034889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.034910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.034926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.034948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.034963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.034985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.035001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.035038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.035075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.035112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:123032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.035149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.035186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.035223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.035260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.035301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.035338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.035376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.035412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:123096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.035449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.035486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.035523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.035569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-07-26 14:18:10.035606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-07-26 14:18:10.035642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.044 [2024-07-26 14:18:10.035664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.045 [2024-07-26 14:18:10.035680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.035701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.045 [2024-07-26 14:18:10.035717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.035738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.035760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.035784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.035800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.035822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.045 [2024-07-26 14:18:10.035838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.035860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:123656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:05.045 [2024-07-26 14:18:10.035877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.035899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.045 [2024-07-26 14:18:10.035915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.035937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.035953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.035975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.035992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.036014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.036029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.036051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.036067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.036089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.036105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.036126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.036142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.036164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.036181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.037022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.037050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.037079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:54 nsid:1 lba:124088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.037097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.037119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:124104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.037136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.037157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.045 [2024-07-26 14:18:10.037173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.037195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.045 [2024-07-26 14:18:10.037212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.037234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.045 [2024-07-26 14:18:10.037250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.037271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:123576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.045 [2024-07-26 14:18:10.037287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.037309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.045 [2024-07-26 14:18:10.037325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.037736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.037760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.037787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:124128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.037804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.037826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:124144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.037842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.037863] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.037879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.037901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.037916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.037943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.045 [2024-07-26 14:18:10.037959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.037981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.045 [2024-07-26 14:18:10.037997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.038018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.045 [2024-07-26 14:18:10.038034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.038056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.038071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.038093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.038109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.039081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:124224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.039106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.039134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.045 [2024-07-26 14:18:10.039151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.039173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:123800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.039189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 
sqhd:0075 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.039210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-07-26 14:18:10.039226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.045 [2024-07-26 14:18:10.039247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-07-26 14:18:10.039263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.046 [2024-07-26 14:18:10.039285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-07-26 14:18:10.039301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:05.046 [2024-07-26 14:18:10.039322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-07-26 14:18:10.039338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.046 [2024-07-26 14:18:10.039364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.046 [2024-07-26 14:18:10.039381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.046 [2024-07-26 14:18:10.039403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.046 [2024-07-26 14:18:10.039418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.046 [2024-07-26 14:18:10.039439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:123032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.046 [2024-07-26 14:18:10.039455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.046 [2024-07-26 14:18:10.039477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.046 [2024-07-26 14:18:10.039492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.046 [2024-07-26 14:18:10.039514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-07-26 14:18:10.039536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.046 [2024-07-26 14:18:10.039560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.046 [2024-07-26 14:18:10.039577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
[... several hundred near-identical notice pairs elided: from 14:18:10.039 through 14:18:10.051, nvme_qpair.c: 243:nvme_io_qpair_print_command and nvme_qpair.c: 474:spdk_nvme_print_completion report every outstanding READ and WRITE on qid:1 (cids 0-126, LBAs roughly 122952-124664, len:8) completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 -- the ANA-inaccessible status the multipath test drives the path into -- repeated for each queued I/O up to the shutdown notice below.]
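The elided block is uniform enough to summarize mechanically instead of reading line by line. A minimal sketch, not part of the test run, assuming the bdevperf output had been captured to try.txt (the scratch file the teardown below removes); it tallies completions per NVMe status string:

    # count spdk_nvme_print_completion notices per status string
    grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]*([0-9a-f/]*)' try.txt |
        sed 's/.*\*NOTICE\*: //' | sort | uniq -c | sort -rn

On a capture of the run above this would print a single bucket, ASYMMETRIC ACCESS INACCESSIBLE (03/02), with a count in the hundreds.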
00:24:05.050 Received shutdown signal, test time was about 32.620231 seconds
00:24:05.050 
00:24:05.050                                                                    Latency(us)
00:24:05.050 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:05.050 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:05.050 Verification LBA range: start 0x0 length 0x4000
00:24:05.050 Nvme0n1                     :      32.62    8193.92      32.01       0.00     0.00   15591.47     297.34 4101097.24
00:24:05.050 ===================================================================================================================
00:24:05.050 Total                       :              8193.92      32.01       0.00     0.00   15591.47     297.34 4101097.24
00:24:05.050 
00:24:05.050 14:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:05.308 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 295334 ']'
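A quick consistency check on the Latency(us) table above: the MiB/s column is implied by the IOPS column and the 4096-byte IO size, so the 32.01 figure can be reproduced directly:

    # 8193.92 IOPS x 4096 B per IO, expressed in MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 8193.92 * 4096 / (1024 * 1024) }'
    # -> 32.01 MiB/s, matching the Nvme0n1 and Total rows

The runtime column cross-checks the same way: 32.62 s at 8193.92 IOPS is roughly 267,000 verified I/Os.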
14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 295334
00:24:05.309 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 295334 ']'
00:24:05.309 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 295334
00:24:05.309 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:24:05.309 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:05.309 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 295334
00:24:05.309 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:05.309 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:05.309 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 295334'
killing process with pid 295334
14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 295334
00:24:05.309 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 295334
00:24:05.568 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:05.568 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:05.568 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:05.568 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:05.568 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:05.568 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:05.568 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:05.568 14:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:07.474 14:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:07.733 
00:24:07.733 real 0m41.235s
00:24:07.733 user 2m4.010s
00:24:07.733 sys 0m10.949s
00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:07.733 ************************************
00:24:07.733 END TEST nvmf_host_multipath_status
00:24:07.733 ************************************
00:24:07.733 14:18:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:24:07.733 14:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:24:07.733 14:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:07.733 14:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.733 ************************************
00:24:07.733 START TEST nvmf_discovery_remove_ifc
00:24:07.733 ************************************
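The killprocess trace above follows a fixed shape. A hedged reconstruction of just the visible steps -- not the verbatim SPDK helper from autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1              # the '[' -z ... ']' guard
        kill -0 "$pid" || return 0             # signal 0: is it still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # the trace compares $process_name against "sudo" before signalling;
        # a plain reactor process, as above, just gets the default SIGTERM
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                    # reap it; ignore its exit code
    }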
nvmf_discovery_remove_ifc 00:24:07.733 ************************************ 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:07.733 * Looking for test storage... 00:24:07.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
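One detail worth noting in the common.sh setup above: NVME_HOSTID is simply the UUID embedded in the NQN that nvme gen-hostnqn returns, so the two values set at nvmf/common.sh@17-18 always agree. A minimal sketch of that relationship (assumes nvme-cli is installed):

    hostnqn=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    hostid=${hostnqn##*uuid:}         # strip the prefix, leaving the bare UUID
    echo "--hostnqn=$hostnqn --hostid=$hostid"   # the NVME_HOST pair above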
00:24:07.733 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=[~0.9 kB value elided: /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin repeated seven times, then the stock system PATH ending in /var/lib/snapd/snap/bin]
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=[the same value with /opt/go/1.21.1/bin prepended]
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=[the same value with /opt/protoc/21.7/bin prepended]
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo [the exported PATH value]
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
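The NVMF_APP+=(...) lines above are the standard bash-array idiom for building an argv: each flag and value stays its own word, so quoting survives when the target binary is eventually launched. A sketch of the same pattern (the nvmf_tgt path is illustrative, not taken from this log):

    NVMF_APP=(/usr/local/bin/nvmf_tgt)              # hypothetical binary path
    NVMF_APP_SHM_ID=0
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)     # same append as nvmf/common.sh@29
    printf '%q ' "${NVMF_APP[@]}"; echo             # inspect the assembled command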
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:07.734 14:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=()
00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs
00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=()
00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=()
00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers
00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=()
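The eval '_remove_spdk_ns 15> /dev/null' entries above (and in the teardown earlier) look odd until you know bash writes its -x trace to the descriptor named by BASH_XTRACEFD. If that descriptor is 15 -- an assumption here, inferred from the redirection -- pointing fd 15 at /dev/null for one command hides that command's trace without a global set +x:

    exec 15>&2                     # assumed xtrace descriptor
    BASH_XTRACEFD=15
    set -x
    noisy() { :; }
    eval 'noisy 15> /dev/null'     # noisy's trace is discarded, tracing stays on
    set +x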
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:10.267 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:10.267 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:10.267 Found net devices under 0000:09:00.0: cvl_0_0 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.267 
14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:10.267 Found net devices under 0000:09:00.1: cvl_0_1 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:10.267 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:10.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:10.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:24:10.268 00:24:10.268 --- 10.0.0.2 ping statistics --- 00:24:10.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.268 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:24:10.268 00:24:10.268 --- 10.0.0.1 ping statistics --- 00:24:10.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.268 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=302392 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 302392 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 302392 ']' 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
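The two clean pings close out nvmf_tcp_init: one E810 port (cvl_0_0) has been moved into a private network namespace as the target side, while its sibling (cvl_0_1) stays in the root namespace as the initiator side. Condensed replay of the commands the trace just executed (all taken verbatim from nvmf/common.sh@244-268 above):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator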
00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:10.268 14:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.268 [2024-07-26 14:18:17.908804] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:24:10.268 [2024-07-26 14:18:17.908899] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.268 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.268 [2024-07-26 14:18:17.970598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.268 [2024-07-26 14:18:18.072194] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.268 [2024-07-26 14:18:18.072250] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.268 [2024-07-26 14:18:18.072279] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.268 [2024-07-26 14:18:18.072290] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.268 [2024-07-26 14:18:18.072300] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.268 [2024-07-26 14:18:18.072325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.268 [2024-07-26 14:18:18.221498] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.268 [2024-07-26 14:18:18.229719] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:10.268 null0 00:24:10.268 [2024-07-26 14:18:18.261684] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=302448 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 
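The tcp.c notices above (discovery listener on 8009, null0, I/O listener on 4420) come from the anonymous rpc_cmd batch at discovery_remove_ifc.sh@43. The trace shows only the resulting notices, not the RPC arguments, so the following scripts/rpc.py replay is a hypothetical reconstruction; in particular the null0 size and block size are invented:

  scripts/rpc.py nvmf_create_transport -t tcp -o      # "*** TCP Transport Init ***"
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009                      # discovery listener
  scripts/rpc.py bdev_null_create null0 1000 512      # backing bdev (sizes assumed)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420                      # I/O listener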
00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 302448 /tmp/host.sock 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 302448 ']' 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:10.268 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:10.268 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.527 [2024-07-26 14:18:18.334666] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:24:10.527 [2024-07-26 14:18:18.334746] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302448 ] 00:24:10.527 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.527 [2024-07-26 14:18:18.395525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.527 [2024-07-26 14:18:18.501831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.527 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:10.527 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:10.527 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.527 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:10.527 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.527 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.785 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.785 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:10.785 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.785 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.785 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.785 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:10.785 14:18:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.785 14:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.718 [2024-07-26 14:18:19.708639] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:11.718 [2024-07-26 14:18:19.708686] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:11.718 [2024-07-26 14:18:19.708711] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:11.976 [2024-07-26 14:18:19.794995] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:11.976 [2024-07-26 14:18:19.972629] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:11.976 [2024-07-26 14:18:19.972695] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:11.976 [2024-07-26 14:18:19.972737] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:11.976 [2024-07-26 14:18:19.972762] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:11.976 [2024-07-26 14:18:19.972795] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:11.976 14:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.976 14:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:11.976 14:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:11.976 14:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.976 14:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:11.976 14:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.976 14:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:11.976 14:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.976 14:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:11.976 14:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.233 [2024-07-26 14:18:20.018320] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x9438e0 was disconnected and freed. delete nvme_qpair. 
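The rpc_cmd | jq | sort | xargs blocks that repeat from here on are the wait_for_bdev polling helper comparing the host's bdev list against an expected value once per second. A plausible reconstruction from the visible pipeline (the real helpers live in discovery_remove_ifc.sh; the loop body is inferred):

  get_bdev_list() {   # one sorted line of bdev names, empty when none exist
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {   # poll until the list matches the expectation
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }

  wait_for_bdev nvme0n1    # discovery_remove_ifc.sh@72: wait for the attach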
00:24:12.233 14:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:12.233 14:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:12.233 14:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:12.233 14:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:12.234 14:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:12.234 14:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.234 14:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:12.234 14:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.234 14:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:12.234 14:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:12.234 14:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:12.234 14:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.234 14:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:12.234 14:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:13.165 14:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:13.165 14:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.165 14:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:13.165 14:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.165 14:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:13.165 14:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:13.165 14:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:13.165 14:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.165 14:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:13.165 14:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:14.537 14:18:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:14.537 14:18:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.537 14:18:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:14.537 14:18:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:14.537 14:18:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.537 14:18:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.537 14:18:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:14.537 14:18:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.537 14:18:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:14.537 14:18:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:15.469 14:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:15.469 14:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.469 14:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:15.469 14:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.469 14:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.469 14:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:15.469 14:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:15.469 14:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.469 14:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:15.469 14:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:16.402 14:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:16.402 14:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.402 14:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:16.402 14:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.402 14:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.402 14:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:16.402 14:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:16.402 14:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.402 14:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:16.402 14:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:17.335 14:18:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:17.335 14:18:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.335 14:18:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.335 14:18:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:17.335 14:18:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.335 14:18:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:17.335 14:18:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:17.335 14:18:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.335 14:18:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:17.335 14:18:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:17.593 [2024-07-26 14:18:25.414116] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:17.593 [2024-07-26 14:18:25.414210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.593 [2024-07-26 14:18:25.414233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.593 [2024-07-26 14:18:25.414252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.593 [2024-07-26 14:18:25.414264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.593 [2024-07-26 14:18:25.414277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.593 [2024-07-26 14:18:25.414289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.593 [2024-07-26 14:18:25.414303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.593 [2024-07-26 14:18:25.414315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.593 [2024-07-26 14:18:25.414328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.593 [2024-07-26 14:18:25.414340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.593 [2024-07-26 14:18:25.414353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90a320 is same with the state(5) to be set 00:24:17.593 [2024-07-26 14:18:25.424129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90a320 (9): Bad file descriptor 00:24:17.593 [2024-07-26 14:18:25.434176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:18.547 14:18:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:18.547 14:18:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.547 14:18:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:18.547 14:18:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.547 14:18:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:18.547 14:18:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:18.547 14:18:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:18.547 [2024-07-26 14:18:26.492586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:18.547 [2024-07-26 14:18:26.492671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x90a320 with addr=10.0.0.2, port=4420 00:24:18.547 [2024-07-26 14:18:26.492701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90a320 is same with the state(5) to be set 00:24:18.547 [2024-07-26 14:18:26.492757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90a320 (9): Bad file descriptor 00:24:18.547 [2024-07-26 14:18:26.493228] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:18.547 [2024-07-26 14:18:26.493276] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:18.547 [2024-07-26 14:18:26.493296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:18.547 [2024-07-26 14:18:26.493314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:18.547 [2024-07-26 14:18:26.493346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:18.547 [2024-07-26 14:18:26.493369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:18.547 14:18:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.547 14:18:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:18.547 14:18:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:19.481 [2024-07-26 14:18:27.495889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:19.481 [2024-07-26 14:18:27.495938] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:19.481 [2024-07-26 14:18:27.495968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:19.481 [2024-07-26 14:18:27.495983] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:19.481 [2024-07-26 14:18:27.496012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
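Context for the reconnect failures above: the discovery service was started (trace at discovery_remove_ifc.sh@69, command repeated here verbatim) with deliberately aggressive timeouts, so a dead path is retried quickly and abandoned quickly. The flag glosses below paraphrase the SPDK bdev_nvme option semantics:

  # --reconnect-delay-sec 1       retry the connection about once per second
  # --fast-io-fail-timeout-sec 1  start failing queued I/O after ~1 s offline
  # --ctrlr-loss-timeout-sec 2    give up and delete the controller after ~2 s
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach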
00:24:19.481 [2024-07-26 14:18:27.496056] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:19.481 [2024-07-26 14:18:27.496118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.481 [2024-07-26 14:18:27.496154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.481 [2024-07-26 14:18:27.496174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.481 [2024-07-26 14:18:27.496186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.481 [2024-07-26 14:18:27.496200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.481 [2024-07-26 14:18:27.496212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.481 [2024-07-26 14:18:27.496225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.481 [2024-07-26 14:18:27.496237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.481 [2024-07-26 14:18:27.496250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.481 [2024-07-26 14:18:27.496262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.481 [2024-07-26 14:18:27.496275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
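The SQ-deletion aborts and failed-state transitions above are the intended outcome, not a test bug: the script tore down the target-side interface and, a few lines below, restores it so discovery can attach a fresh controller. Condensed from the trace (discovery_remove_ifc.sh@75-76 and @82-83):

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # @75
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # @76
  # ... nvme0n1 disappears; reconnects fail with errno 110 (log above) ...
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @82
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # @83
  # ... a new attach creates nvme1n1, which wait_for_bdev nvme1n1 awaits ...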
00:24:19.481 [2024-07-26 14:18:27.496329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x909780 (9): Bad file descriptor 00:24:19.481 [2024-07-26 14:18:27.497319] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:19.481 [2024-07-26 14:18:27.497342] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:19.739 14:18:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:20.672 14:18:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:20.672 14:18:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.672 14:18:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:20.672 14:18:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.672 14:18:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.672 14:18:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:20.672 14:18:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.672 14:18:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.672 14:18:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:20.672 14:18:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:21.604 [2024-07-26 14:18:29.555182] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:21.604 [2024-07-26 14:18:29.555224] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:21.604 [2024-07-26 14:18:29.555247] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:21.860 14:18:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:21.860 14:18:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.860 14:18:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:21.860 14:18:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.860 14:18:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.860 14:18:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:21.860 14:18:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:21.860 [2024-07-26 14:18:29.683655] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:21.860 14:18:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.860 14:18:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:21.860 14:18:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:21.860 [2024-07-26 14:18:29.786476] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:21.860 [2024-07-26 14:18:29.786525] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:21.860 [2024-07-26 14:18:29.786584] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:21.860 [2024-07-26 14:18:29.786609] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:21.860 [2024-07-26 14:18:29.786622] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:21.860 [2024-07-26 14:18:29.793504] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x9220b0 was disconnected and freed. 
delete nvme_qpair. 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 302448 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 302448 ']' 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 302448 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:22.766 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 302448 00:24:23.023 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:23.023 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:23.023 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 302448' 00:24:23.023 killing process with pid 302448 00:24:23.023 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 302448 00:24:23.023 14:18:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 302448 00:24:23.279 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:23.279 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:23.279 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:23.280 rmmod nvme_tcp 00:24:23.280 rmmod nvme_fabrics 00:24:23.280 rmmod nvme_keyring 
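Teardown: killprocess above stopped the host app (pid 302448); nvmftestfini then unloads the nvme-tcp modules (the rmmod lines) and, just below, kills the target (pid 302392). A sketch of killprocess reconstructed from the checks visible in the trace (the real helper lives in autotest_common.sh):

  killprocess() {
      local pid=$1 process_name
      [[ -n $pid ]] || return 1                         # @950: refuse an empty pid
      kill -0 "$pid" || return 1                        # @954: still alive?
      process_name=$(ps --no-headers -o comm= "$pid")   # @955-956
      [[ $process_name == sudo ]] && return 1           # @960: never kill sudo itself
      echo "killing process with pid $pid"              # @968
      kill "$pid"                                       # @969
      wait "$pid"                                       # @974: reap, propagate status
  }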
00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 302392 ']' 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 302392 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 302392 ']' 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 302392 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 302392 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 302392' 00:24:23.280 killing process with pid 302392 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 302392 00:24:23.280 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 302392 00:24:23.538 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:23.538 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:23.538 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:23.538 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.538 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:23.538 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.538 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.538 14:18:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.445 14:18:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:25.445 00:24:25.445 real 0m17.883s 00:24:25.445 user 0m25.755s 00:24:25.445 sys 0m3.164s 00:24:25.445 14:18:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:25.445 14:18:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.445 ************************************ 00:24:25.445 END TEST nvmf_discovery_remove_ifc 00:24:25.445 ************************************ 00:24:25.445 14:18:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:25.445 14:18:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:25.445 14:18:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:25.445 14:18:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.703 ************************************ 00:24:25.703 START TEST nvmf_identify_kernel_target 00:24:25.703 ************************************ 00:24:25.703 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:25.703 * Looking for test storage... 00:24:25.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.703 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.703 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:25.703 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.703 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.703 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.703 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.703 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.703 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.703 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.703 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.704 14:18:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:25.704 14:18:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:27.603 
14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.603 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:27.604 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:27.604 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:27.604 Found net devices under 0000:09:00.0: cvl_0_0 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:27.604 Found net devices under 0000:09:00.1: cvl_0_1 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.604 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:27.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:27.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms
00:24:27.863 
00:24:27.863 --- 10.0.0.2 ping statistics ---
00:24:27.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:27.863 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:27.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:27.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms
00:24:27.863 
00:24:27.863 --- 10.0.0.1 ping statistics ---
00:24:27.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:27.863 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1
00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:27.863 14:18:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:29.235 Waiting for block devices as requested 00:24:29.235 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:29.235 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:29.235 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:29.235 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:29.235 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:29.493 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:29.493 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:29.493 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:29.751 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:24:29.751 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:29.751 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:30.009 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:30.009 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:30.009 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:30.009 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:30.266 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:30.266 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:30.266 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:30.266 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:30.266 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:30.266 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:30.266 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:30.266 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:30.266 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:30.266 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
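The entries that follow first probe /dev/nvme0n1 for an existing partition table (spdk-gpt.py, blkid) and then assemble the kernel NVMe-oF target over configfs. Condensed into a standalone bash sketch, with the configfs attribute names filled in from the standard nvmet layout -- the xtrace shows only the echoed values, so the redirection targets here are inferred, not quoted from the log:

  #!/usr/bin/env bash
  # Sketch of configure_kernel_target() as traced below: export /dev/nvme0n1 as
  # namespace 1 of nqn.2016-06.io.spdk:testnqn on a TCP listener at 10.0.0.1:4420.
  nqn=nqn.2016-06.io.spdk:testnqn
  nvmet=/sys/kernel/config/nvmet          # requires 'modprobe nvmet' (done earlier in the trace)

  mkdir -p "$nvmet/subsystems/$nqn/namespaces/1" "$nvmet/ports/1"
  echo "SPDK-$nqn"  > "$nvmet/subsystems/$nqn/attr_serial"          # serial string
  echo 1            > "$nvmet/subsystems/$nqn/attr_allow_any_host"  # no host allow-list
  echo /dev/nvme0n1 > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
  echo 1            > "$nvmet/subsystems/$nqn/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"                  # listen address
  echo tcp          > "$nvmet/ports/1/addr_trtype"                  # transport
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"                 # service id (port)
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/"       # go live on the port

Teardown (clean_kernel_target, traced near the end of this test) reverses the order: echo 0 to the namespace's enable node, rm -f the port symlink, rmdir namespace, port, and subsystem, then modprobe -r nvmet_tcp nvmet.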
00:24:30.266 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:24:30.266 No valid GPT data, bailing
00:24:30.524 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:24:30.524 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt=
00:24:30.524 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1
00:24:30.524 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:24:30.524 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:24:30.524 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:24:30.524 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:24:30.525 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:24:30.525 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:24:30.525 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1
00:24:30.525 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:24:30.525 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1
00:24:30.525 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:24:30.525 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp
00:24:30.525 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420
00:24:30.525 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4
00:24:30.525 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:24:30.525 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420
00:24:30.525 
00:24:30.525 Discovery Log Number of Records 2, Generation counter 2
00:24:30.525 =====Discovery Log Entry 0======
00:24:30.525 trtype: tcp
00:24:30.525 adrfam: ipv4
00:24:30.525 subtype: current discovery subsystem
00:24:30.525 treq: not specified, sq flow control disable supported
00:24:30.525 portid: 1
00:24:30.525 trsvcid: 4420
00:24:30.525 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:24:30.525 traddr: 10.0.0.1
00:24:30.525 eflags: none
00:24:30.525 sectype: none
00:24:30.525 =====Discovery Log Entry 1======
00:24:30.525 trtype: tcp
00:24:30.525 adrfam: ipv4
00:24:30.525 subtype: nvme subsystem
00:24:30.525 treq: not specified, sq flow control disable supported
00:24:30.525 portid: 1
00:24:30.525 trsvcid: 4420
00:24:30.525 subnqn: nqn.2016-06.io.spdk:testnqn
00:24:30.525 traddr: 10.0.0.1
00:24:30.525 eflags: none
00:24:30.525 sectype: none
00:24:30.525 14:18:38
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:30.525 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:30.525 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.525 ===================================================== 00:24:30.525 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:30.525 ===================================================== 00:24:30.525 Controller Capabilities/Features 00:24:30.525 ================================ 00:24:30.525 Vendor ID: 0000 00:24:30.525 Subsystem Vendor ID: 0000 00:24:30.525 Serial Number: cd4eec9f8c7d8a217fb5 00:24:30.525 Model Number: Linux 00:24:30.525 Firmware Version: 6.7.0-68 00:24:30.525 Recommended Arb Burst: 0 00:24:30.525 IEEE OUI Identifier: 00 00 00 00:24:30.525 Multi-path I/O 00:24:30.525 May have multiple subsystem ports: No 00:24:30.525 May have multiple controllers: No 00:24:30.525 Associated with SR-IOV VF: No 00:24:30.525 Max Data Transfer Size: Unlimited 00:24:30.525 Max Number of Namespaces: 0 00:24:30.525 Max Number of I/O Queues: 1024 00:24:30.525 NVMe Specification Version (VS): 1.3 00:24:30.525 NVMe Specification Version (Identify): 1.3 00:24:30.525 Maximum Queue Entries: 1024 00:24:30.525 Contiguous Queues Required: No 00:24:30.525 Arbitration Mechanisms Supported 00:24:30.525 Weighted Round Robin: Not Supported 00:24:30.525 Vendor Specific: Not Supported 00:24:30.525 Reset Timeout: 7500 ms 00:24:30.525 Doorbell Stride: 4 bytes 00:24:30.525 NVM Subsystem Reset: Not Supported 00:24:30.525 Command Sets Supported 00:24:30.525 NVM Command Set: Supported 00:24:30.525 Boot Partition: Not Supported 00:24:30.525 Memory Page Size Minimum: 4096 bytes 00:24:30.525 Memory Page Size Maximum: 4096 bytes 00:24:30.525 Persistent Memory Region: Not Supported 00:24:30.525 Optional Asynchronous Events Supported 00:24:30.525 Namespace Attribute Notices: Not Supported 00:24:30.525 Firmware Activation Notices: Not Supported 00:24:30.525 ANA Change Notices: Not Supported 00:24:30.525 PLE Aggregate Log Change Notices: Not Supported 00:24:30.525 LBA Status Info Alert Notices: Not Supported 00:24:30.525 EGE Aggregate Log Change Notices: Not Supported 00:24:30.525 Normal NVM Subsystem Shutdown event: Not Supported 00:24:30.525 Zone Descriptor Change Notices: Not Supported 00:24:30.525 Discovery Log Change Notices: Supported 00:24:30.525 Controller Attributes 00:24:30.525 128-bit Host Identifier: Not Supported 00:24:30.525 Non-Operational Permissive Mode: Not Supported 00:24:30.525 NVM Sets: Not Supported 00:24:30.525 Read Recovery Levels: Not Supported 00:24:30.525 Endurance Groups: Not Supported 00:24:30.525 Predictable Latency Mode: Not Supported 00:24:30.525 Traffic Based Keep ALive: Not Supported 00:24:30.525 Namespace Granularity: Not Supported 00:24:30.525 SQ Associations: Not Supported 00:24:30.525 UUID List: Not Supported 00:24:30.525 Multi-Domain Subsystem: Not Supported 00:24:30.525 Fixed Capacity Management: Not Supported 00:24:30.525 Variable Capacity Management: Not Supported 00:24:30.525 Delete Endurance Group: Not Supported 00:24:30.525 Delete NVM Set: Not Supported 00:24:30.525 Extended LBA Formats Supported: Not Supported 00:24:30.525 Flexible Data Placement Supported: Not Supported 00:24:30.525 00:24:30.525 Controller Memory Buffer Support 00:24:30.525 ================================ 00:24:30.525 Supported: No 
00:24:30.525 00:24:30.525 Persistent Memory Region Support 00:24:30.525 ================================ 00:24:30.525 Supported: No 00:24:30.525 00:24:30.525 Admin Command Set Attributes 00:24:30.525 ============================ 00:24:30.525 Security Send/Receive: Not Supported 00:24:30.525 Format NVM: Not Supported 00:24:30.525 Firmware Activate/Download: Not Supported 00:24:30.525 Namespace Management: Not Supported 00:24:30.525 Device Self-Test: Not Supported 00:24:30.525 Directives: Not Supported 00:24:30.525 NVMe-MI: Not Supported 00:24:30.525 Virtualization Management: Not Supported 00:24:30.525 Doorbell Buffer Config: Not Supported 00:24:30.525 Get LBA Status Capability: Not Supported 00:24:30.525 Command & Feature Lockdown Capability: Not Supported 00:24:30.525 Abort Command Limit: 1 00:24:30.525 Async Event Request Limit: 1 00:24:30.525 Number of Firmware Slots: N/A 00:24:30.525 Firmware Slot 1 Read-Only: N/A 00:24:30.525 Firmware Activation Without Reset: N/A 00:24:30.525 Multiple Update Detection Support: N/A 00:24:30.525 Firmware Update Granularity: No Information Provided 00:24:30.525 Per-Namespace SMART Log: No 00:24:30.525 Asymmetric Namespace Access Log Page: Not Supported 00:24:30.525 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:30.525 Command Effects Log Page: Not Supported 00:24:30.525 Get Log Page Extended Data: Supported 00:24:30.525 Telemetry Log Pages: Not Supported 00:24:30.525 Persistent Event Log Pages: Not Supported 00:24:30.525 Supported Log Pages Log Page: May Support 00:24:30.525 Commands Supported & Effects Log Page: Not Supported 00:24:30.525 Feature Identifiers & Effects Log Page:May Support 00:24:30.525 NVMe-MI Commands & Effects Log Page: May Support 00:24:30.525 Data Area 4 for Telemetry Log: Not Supported 00:24:30.525 Error Log Page Entries Supported: 1 00:24:30.525 Keep Alive: Not Supported 00:24:30.525 00:24:30.525 NVM Command Set Attributes 00:24:30.525 ========================== 00:24:30.525 Submission Queue Entry Size 00:24:30.525 Max: 1 00:24:30.525 Min: 1 00:24:30.525 Completion Queue Entry Size 00:24:30.525 Max: 1 00:24:30.525 Min: 1 00:24:30.525 Number of Namespaces: 0 00:24:30.525 Compare Command: Not Supported 00:24:30.525 Write Uncorrectable Command: Not Supported 00:24:30.525 Dataset Management Command: Not Supported 00:24:30.525 Write Zeroes Command: Not Supported 00:24:30.525 Set Features Save Field: Not Supported 00:24:30.525 Reservations: Not Supported 00:24:30.525 Timestamp: Not Supported 00:24:30.525 Copy: Not Supported 00:24:30.525 Volatile Write Cache: Not Present 00:24:30.525 Atomic Write Unit (Normal): 1 00:24:30.525 Atomic Write Unit (PFail): 1 00:24:30.525 Atomic Compare & Write Unit: 1 00:24:30.525 Fused Compare & Write: Not Supported 00:24:30.525 Scatter-Gather List 00:24:30.525 SGL Command Set: Supported 00:24:30.525 SGL Keyed: Not Supported 00:24:30.525 SGL Bit Bucket Descriptor: Not Supported 00:24:30.525 SGL Metadata Pointer: Not Supported 00:24:30.525 Oversized SGL: Not Supported 00:24:30.525 SGL Metadata Address: Not Supported 00:24:30.525 SGL Offset: Supported 00:24:30.525 Transport SGL Data Block: Not Supported 00:24:30.525 Replay Protected Memory Block: Not Supported 00:24:30.525 00:24:30.525 Firmware Slot Information 00:24:30.525 ========================= 00:24:30.525 Active slot: 0 00:24:30.525 00:24:30.525 00:24:30.525 Error Log 00:24:30.525 ========= 00:24:30.525 00:24:30.525 Active Namespaces 00:24:30.525 ================= 00:24:30.525 Discovery Log Page 00:24:30.525 ================== 00:24:30.525 
Generation Counter: 2 00:24:30.525 Number of Records: 2 00:24:30.525 Record Format: 0 00:24:30.525 00:24:30.525 Discovery Log Entry 0 00:24:30.525 ---------------------- 00:24:30.525 Transport Type: 3 (TCP) 00:24:30.525 Address Family: 1 (IPv4) 00:24:30.525 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:30.525 Entry Flags: 00:24:30.525 Duplicate Returned Information: 0 00:24:30.525 Explicit Persistent Connection Support for Discovery: 0 00:24:30.525 Transport Requirements: 00:24:30.526 Secure Channel: Not Specified 00:24:30.526 Port ID: 1 (0x0001) 00:24:30.526 Controller ID: 65535 (0xffff) 00:24:30.526 Admin Max SQ Size: 32 00:24:30.526 Transport Service Identifier: 4420 00:24:30.526 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:30.526 Transport Address: 10.0.0.1 00:24:30.526 Discovery Log Entry 1 00:24:30.526 ---------------------- 00:24:30.526 Transport Type: 3 (TCP) 00:24:30.526 Address Family: 1 (IPv4) 00:24:30.526 Subsystem Type: 2 (NVM Subsystem) 00:24:30.526 Entry Flags: 00:24:30.526 Duplicate Returned Information: 0 00:24:30.526 Explicit Persistent Connection Support for Discovery: 0 00:24:30.526 Transport Requirements: 00:24:30.526 Secure Channel: Not Specified 00:24:30.526 Port ID: 1 (0x0001) 00:24:30.526 Controller ID: 65535 (0xffff) 00:24:30.526 Admin Max SQ Size: 32 00:24:30.526 Transport Service Identifier: 4420 00:24:30.526 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:30.526 Transport Address: 10.0.0.1 00:24:30.526 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:30.526 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.785 get_feature(0x01) failed 00:24:30.785 get_feature(0x02) failed 00:24:30.785 get_feature(0x04) failed 00:24:30.785 ===================================================== 00:24:30.785 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:30.785 ===================================================== 00:24:30.785 Controller Capabilities/Features 00:24:30.785 ================================ 00:24:30.785 Vendor ID: 0000 00:24:30.785 Subsystem Vendor ID: 0000 00:24:30.785 Serial Number: 065caae589bcbb79ae95 00:24:30.785 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:30.785 Firmware Version: 6.7.0-68 00:24:30.785 Recommended Arb Burst: 6 00:24:30.785 IEEE OUI Identifier: 00 00 00 00:24:30.785 Multi-path I/O 00:24:30.785 May have multiple subsystem ports: Yes 00:24:30.785 May have multiple controllers: Yes 00:24:30.785 Associated with SR-IOV VF: No 00:24:30.785 Max Data Transfer Size: Unlimited 00:24:30.785 Max Number of Namespaces: 1024 00:24:30.785 Max Number of I/O Queues: 128 00:24:30.785 NVMe Specification Version (VS): 1.3 00:24:30.785 NVMe Specification Version (Identify): 1.3 00:24:30.785 Maximum Queue Entries: 1024 00:24:30.785 Contiguous Queues Required: No 00:24:30.785 Arbitration Mechanisms Supported 00:24:30.785 Weighted Round Robin: Not Supported 00:24:30.785 Vendor Specific: Not Supported 00:24:30.785 Reset Timeout: 7500 ms 00:24:30.785 Doorbell Stride: 4 bytes 00:24:30.785 NVM Subsystem Reset: Not Supported 00:24:30.785 Command Sets Supported 00:24:30.785 NVM Command Set: Supported 00:24:30.785 Boot Partition: Not Supported 00:24:30.785 Memory Page Size Minimum: 4096 bytes 00:24:30.785 Memory Page Size Maximum: 4096 bytes 00:24:30.785 
Persistent Memory Region: Not Supported 00:24:30.785 Optional Asynchronous Events Supported 00:24:30.785 Namespace Attribute Notices: Supported 00:24:30.785 Firmware Activation Notices: Not Supported 00:24:30.785 ANA Change Notices: Supported 00:24:30.785 PLE Aggregate Log Change Notices: Not Supported 00:24:30.785 LBA Status Info Alert Notices: Not Supported 00:24:30.785 EGE Aggregate Log Change Notices: Not Supported 00:24:30.785 Normal NVM Subsystem Shutdown event: Not Supported 00:24:30.785 Zone Descriptor Change Notices: Not Supported 00:24:30.785 Discovery Log Change Notices: Not Supported 00:24:30.785 Controller Attributes 00:24:30.785 128-bit Host Identifier: Supported 00:24:30.785 Non-Operational Permissive Mode: Not Supported 00:24:30.785 NVM Sets: Not Supported 00:24:30.785 Read Recovery Levels: Not Supported 00:24:30.785 Endurance Groups: Not Supported 00:24:30.785 Predictable Latency Mode: Not Supported 00:24:30.785 Traffic Based Keep ALive: Supported 00:24:30.785 Namespace Granularity: Not Supported 00:24:30.785 SQ Associations: Not Supported 00:24:30.785 UUID List: Not Supported 00:24:30.785 Multi-Domain Subsystem: Not Supported 00:24:30.785 Fixed Capacity Management: Not Supported 00:24:30.785 Variable Capacity Management: Not Supported 00:24:30.785 Delete Endurance Group: Not Supported 00:24:30.785 Delete NVM Set: Not Supported 00:24:30.785 Extended LBA Formats Supported: Not Supported 00:24:30.785 Flexible Data Placement Supported: Not Supported 00:24:30.785 00:24:30.785 Controller Memory Buffer Support 00:24:30.785 ================================ 00:24:30.785 Supported: No 00:24:30.785 00:24:30.785 Persistent Memory Region Support 00:24:30.785 ================================ 00:24:30.785 Supported: No 00:24:30.785 00:24:30.785 Admin Command Set Attributes 00:24:30.785 ============================ 00:24:30.785 Security Send/Receive: Not Supported 00:24:30.785 Format NVM: Not Supported 00:24:30.785 Firmware Activate/Download: Not Supported 00:24:30.785 Namespace Management: Not Supported 00:24:30.785 Device Self-Test: Not Supported 00:24:30.786 Directives: Not Supported 00:24:30.786 NVMe-MI: Not Supported 00:24:30.786 Virtualization Management: Not Supported 00:24:30.786 Doorbell Buffer Config: Not Supported 00:24:30.786 Get LBA Status Capability: Not Supported 00:24:30.786 Command & Feature Lockdown Capability: Not Supported 00:24:30.786 Abort Command Limit: 4 00:24:30.786 Async Event Request Limit: 4 00:24:30.786 Number of Firmware Slots: N/A 00:24:30.786 Firmware Slot 1 Read-Only: N/A 00:24:30.786 Firmware Activation Without Reset: N/A 00:24:30.786 Multiple Update Detection Support: N/A 00:24:30.786 Firmware Update Granularity: No Information Provided 00:24:30.786 Per-Namespace SMART Log: Yes 00:24:30.786 Asymmetric Namespace Access Log Page: Supported 00:24:30.786 ANA Transition Time : 10 sec 00:24:30.786 00:24:30.786 Asymmetric Namespace Access Capabilities 00:24:30.786 ANA Optimized State : Supported 00:24:30.786 ANA Non-Optimized State : Supported 00:24:30.786 ANA Inaccessible State : Supported 00:24:30.786 ANA Persistent Loss State : Supported 00:24:30.786 ANA Change State : Supported 00:24:30.786 ANAGRPID is not changed : No 00:24:30.786 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:30.786 00:24:30.786 ANA Group Identifier Maximum : 128 00:24:30.786 Number of ANA Group Identifiers : 128 00:24:30.786 Max Number of Allowed Namespaces : 1024 00:24:30.786 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:30.786 Command Effects Log Page: Supported 
00:24:30.786 Get Log Page Extended Data: Supported 00:24:30.786 Telemetry Log Pages: Not Supported 00:24:30.786 Persistent Event Log Pages: Not Supported 00:24:30.786 Supported Log Pages Log Page: May Support 00:24:30.786 Commands Supported & Effects Log Page: Not Supported 00:24:30.786 Feature Identifiers & Effects Log Page:May Support 00:24:30.786 NVMe-MI Commands & Effects Log Page: May Support 00:24:30.786 Data Area 4 for Telemetry Log: Not Supported 00:24:30.786 Error Log Page Entries Supported: 128 00:24:30.786 Keep Alive: Supported 00:24:30.786 Keep Alive Granularity: 1000 ms 00:24:30.786 00:24:30.786 NVM Command Set Attributes 00:24:30.786 ========================== 00:24:30.786 Submission Queue Entry Size 00:24:30.786 Max: 64 00:24:30.786 Min: 64 00:24:30.786 Completion Queue Entry Size 00:24:30.786 Max: 16 00:24:30.786 Min: 16 00:24:30.786 Number of Namespaces: 1024 00:24:30.786 Compare Command: Not Supported 00:24:30.786 Write Uncorrectable Command: Not Supported 00:24:30.786 Dataset Management Command: Supported 00:24:30.786 Write Zeroes Command: Supported 00:24:30.786 Set Features Save Field: Not Supported 00:24:30.786 Reservations: Not Supported 00:24:30.786 Timestamp: Not Supported 00:24:30.786 Copy: Not Supported 00:24:30.786 Volatile Write Cache: Present 00:24:30.786 Atomic Write Unit (Normal): 1 00:24:30.786 Atomic Write Unit (PFail): 1 00:24:30.786 Atomic Compare & Write Unit: 1 00:24:30.786 Fused Compare & Write: Not Supported 00:24:30.786 Scatter-Gather List 00:24:30.786 SGL Command Set: Supported 00:24:30.786 SGL Keyed: Not Supported 00:24:30.786 SGL Bit Bucket Descriptor: Not Supported 00:24:30.786 SGL Metadata Pointer: Not Supported 00:24:30.786 Oversized SGL: Not Supported 00:24:30.786 SGL Metadata Address: Not Supported 00:24:30.786 SGL Offset: Supported 00:24:30.786 Transport SGL Data Block: Not Supported 00:24:30.786 Replay Protected Memory Block: Not Supported 00:24:30.786 00:24:30.786 Firmware Slot Information 00:24:30.786 ========================= 00:24:30.786 Active slot: 0 00:24:30.786 00:24:30.786 Asymmetric Namespace Access 00:24:30.786 =========================== 00:24:30.786 Change Count : 0 00:24:30.786 Number of ANA Group Descriptors : 1 00:24:30.786 ANA Group Descriptor : 0 00:24:30.786 ANA Group ID : 1 00:24:30.786 Number of NSID Values : 1 00:24:30.786 Change Count : 0 00:24:30.786 ANA State : 1 00:24:30.786 Namespace Identifier : 1 00:24:30.786 00:24:30.786 Commands Supported and Effects 00:24:30.786 ============================== 00:24:30.786 Admin Commands 00:24:30.786 -------------- 00:24:30.786 Get Log Page (02h): Supported 00:24:30.786 Identify (06h): Supported 00:24:30.786 Abort (08h): Supported 00:24:30.786 Set Features (09h): Supported 00:24:30.786 Get Features (0Ah): Supported 00:24:30.786 Asynchronous Event Request (0Ch): Supported 00:24:30.786 Keep Alive (18h): Supported 00:24:30.786 I/O Commands 00:24:30.786 ------------ 00:24:30.786 Flush (00h): Supported 00:24:30.786 Write (01h): Supported LBA-Change 00:24:30.786 Read (02h): Supported 00:24:30.786 Write Zeroes (08h): Supported LBA-Change 00:24:30.786 Dataset Management (09h): Supported 00:24:30.786 00:24:30.786 Error Log 00:24:30.786 ========= 00:24:30.786 Entry: 0 00:24:30.786 Error Count: 0x3 00:24:30.786 Submission Queue Id: 0x0 00:24:30.786 Command Id: 0x5 00:24:30.786 Phase Bit: 0 00:24:30.786 Status Code: 0x2 00:24:30.786 Status Code Type: 0x0 00:24:30.786 Do Not Retry: 1 00:24:30.786 Error Location: 0x28 00:24:30.786 LBA: 0x0 00:24:30.786 Namespace: 0x0 00:24:30.786 Vendor Log 
Page: 0x0 00:24:30.786 ----------- 00:24:30.786 Entry: 1 00:24:30.786 Error Count: 0x2 00:24:30.786 Submission Queue Id: 0x0 00:24:30.786 Command Id: 0x5 00:24:30.786 Phase Bit: 0 00:24:30.786 Status Code: 0x2 00:24:30.786 Status Code Type: 0x0 00:24:30.786 Do Not Retry: 1 00:24:30.786 Error Location: 0x28 00:24:30.786 LBA: 0x0 00:24:30.786 Namespace: 0x0 00:24:30.786 Vendor Log Page: 0x0 00:24:30.786 ----------- 00:24:30.786 Entry: 2 00:24:30.786 Error Count: 0x1 00:24:30.786 Submission Queue Id: 0x0 00:24:30.786 Command Id: 0x4 00:24:30.786 Phase Bit: 0 00:24:30.786 Status Code: 0x2 00:24:30.786 Status Code Type: 0x0 00:24:30.786 Do Not Retry: 1 00:24:30.786 Error Location: 0x28 00:24:30.786 LBA: 0x0 00:24:30.786 Namespace: 0x0 00:24:30.786 Vendor Log Page: 0x0 00:24:30.786 00:24:30.786 Number of Queues 00:24:30.786 ================ 00:24:30.786 Number of I/O Submission Queues: 128 00:24:30.786 Number of I/O Completion Queues: 128 00:24:30.786 00:24:30.786 ZNS Specific Controller Data 00:24:30.786 ============================ 00:24:30.786 Zone Append Size Limit: 0 00:24:30.786 00:24:30.786 00:24:30.786 Active Namespaces 00:24:30.786 ================= 00:24:30.786 get_feature(0x05) failed 00:24:30.786 Namespace ID:1 00:24:30.786 Command Set Identifier: NVM (00h) 00:24:30.786 Deallocate: Supported 00:24:30.786 Deallocated/Unwritten Error: Not Supported 00:24:30.786 Deallocated Read Value: Unknown 00:24:30.786 Deallocate in Write Zeroes: Not Supported 00:24:30.786 Deallocated Guard Field: 0xFFFF 00:24:30.786 Flush: Supported 00:24:30.786 Reservation: Not Supported 00:24:30.786 Namespace Sharing Capabilities: Multiple Controllers 00:24:30.786 Size (in LBAs): 1953525168 (931GiB) 00:24:30.786 Capacity (in LBAs): 1953525168 (931GiB) 00:24:30.786 Utilization (in LBAs): 1953525168 (931GiB) 00:24:30.786 UUID: b83b8cca-6e6f-49c5-9e8a-71eda1d51c27 00:24:30.786 Thin Provisioning: Not Supported 00:24:30.786 Per-NS Atomic Units: Yes 00:24:30.786 Atomic Boundary Size (Normal): 0 00:24:30.786 Atomic Boundary Size (PFail): 0 00:24:30.786 Atomic Boundary Offset: 0 00:24:30.786 NGUID/EUI64 Never Reused: No 00:24:30.786 ANA group ID: 1 00:24:30.786 Namespace Write Protected: No 00:24:30.786 Number of LBA Formats: 1 00:24:30.786 Current LBA Format: LBA Format #00 00:24:30.786 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:30.786 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:30.787 rmmod nvme_tcp 00:24:30.787 rmmod nvme_fabrics 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:30.787 14:18:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.787 14:18:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.692 14:18:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:32.692 14:18:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:32.692 14:18:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:32.692 14:18:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:32.692 14:18:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:32.692 14:18:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:32.692 14:18:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:32.692 14:18:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:32.692 14:18:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:32.692 14:18:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:32.950 14:18:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:34.327 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:34.327 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:34.327 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:34.327 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:34.327 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:34.327 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:34.327 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:34.327 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:34.327 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:34.327 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:34.327 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:34.327 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:34.327 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:34.327 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:34.327 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:24:34.327 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:35.265 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:24:35.265 00:24:35.265 real 0m9.672s 00:24:35.265 user 0m2.050s 00:24:35.265 sys 0m3.539s 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.265 ************************************ 00:24:35.265 END TEST nvmf_identify_kernel_target 00:24:35.265 ************************************ 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.265 ************************************ 00:24:35.265 START TEST nvmf_auth_host 00:24:35.265 ************************************ 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:35.265 * Looking for test storage... 00:24:35.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
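Here common.sh rebuilds the same host identity (NVME_HOSTNQN, NVME_HOSTID, NVME_CONNECT='nvme connect') for the auth suite. For orientation, the initiator-side counterpart to the kernel target built earlier amounts to the following sketch; apart from the 'nvme discover' call, these exact commands are not part of this trace:

  #!/usr/bin/env bash
  # Plain (pre-auth) initiator flow against the kernel target from this log.
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  hostid=29f67375-a902-e411-ace9-001e67bc3c9a

  modprobe nvme-tcp    # host-side transport module, as nvmftestinit does above
  nvme discover -t tcp -a 10.0.0.1 -s 4420 --hostnqn="$hostnqn" --hostid="$hostid"
  nvme connect  -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
                --hostnqn="$hostnqn" --hostid="$hostid"
  nvme list            # the exported namespace shows up as a new /dev/nvmeXn1
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn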
00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:35.265 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:35.266 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:35.266 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:35.266 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:35.266 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:35.266 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:35.266 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.266 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:35.266 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:35.266 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:35.266 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.266 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.266 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.523 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:35.523 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:35.523 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:35.523 14:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:37.422 14:18:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:37.422 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
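The gather_supported_nvmf_pci_devs block above classifies NICs by PCI vendor:device ID: both 0000:09:00.0 and 0000:09:00.1 report 0x8086:0x159b, which sits in the e810 list, so the ice driver is expected. A rough equivalent of that lookup, assuming lspci -Dn output instead of the script's pci_bus_cache (sketch only):

    intel=0x8086
    e810=(0x1592 0x159b)              # E810 device IDs checked in the trace
    while read -r addr _ id _; do     # lspci -Dn: "0000:09:00.0 0200: 8086:159b"
        ven="0x${id%%:*}" dev="0x${id##*:}"
        if [[ $ven == "$intel" && " ${e810[*]} " == *" $dev "* ]]; then
            echo "Found $addr ($ven - $dev)"
        fi
    done < <(lspci -Dn)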
00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:37.422 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.422 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:37.422 Found net devices under 0000:09:00.0: cvl_0_0 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:37.423 Found net devices under 0000:09:00.1: cvl_0_1 00:24:37.423 14:18:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:37.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:24:37.423 00:24:37.423 --- 10.0.0.2 ping statistics --- 00:24:37.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.423 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:37.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:24:37.423 00:24:37.423 --- 10.0.0.1 ping statistics --- 00:24:37.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.423 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=309644 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 309644 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 309644 ']' 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
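nvmf_tcp_init above builds the two-sided topology the rest of the run depends on: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings prove reachability in both directions before nvmf_tgt is launched inside the namespace (the ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth line, pid 309644). Condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator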
00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.423 14:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.797 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.797 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b2210f77aedb7be0a6f6c531abb51a6d 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GzL 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b2210f77aedb7be0a6f6c531abb51a6d 0 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b2210f77aedb7be0a6f6c531abb51a6d 0 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b2210f77aedb7be0a6f6c531abb51a6d 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GzL 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GzL 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.GzL 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:38.798 14:18:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d1c3a028daaf01e5e2cae4e8fe04a59c74b8868e48a20f9ad3ca1ad807815cb8 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.koK 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d1c3a028daaf01e5e2cae4e8fe04a59c74b8868e48a20f9ad3ca1ad807815cb8 3 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d1c3a028daaf01e5e2cae4e8fe04a59c74b8868e48a20f9ad3ca1ad807815cb8 3 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d1c3a028daaf01e5e2cae4e8fe04a59c74b8868e48a20f9ad3ca1ad807815cb8 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.koK 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.koK 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.koK 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7901eea656e5d665add720d6ed451af477881aac2270fe2a 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.8ZN 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7901eea656e5d665add720d6ed451af477881aac2270fe2a 0 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7901eea656e5d665add720d6ed451af477881aac2270fe2a 0 
00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7901eea656e5d665add720d6ed451af477881aac2270fe2a 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.8ZN 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.8ZN 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.8ZN 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=df5a2cbf745f71ed72d30a318c2fe484e3a82915109856d4 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.hYX 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key df5a2cbf745f71ed72d30a318c2fe484e3a82915109856d4 2 00:24:38.798 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 df5a2cbf745f71ed72d30a318c2fe484e3a82915109856d4 2 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=df5a2cbf745f71ed72d30a318c2fe484e3a82915109856d4 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.hYX 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.hYX 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.hYX 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.799 14:18:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b976efe052070ce5a594a1bf70bcb6de 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.cQw 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b976efe052070ce5a594a1bf70bcb6de 1 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b976efe052070ce5a594a1bf70bcb6de 1 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b976efe052070ce5a594a1bf70bcb6de 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.cQw 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.cQw 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.cQw 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e42756f942ddb24bd3c999c29a4a4a24 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.lUv 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e42756f942ddb24bd3c999c29a4a4a24 1 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e42756f942ddb24bd3c999c29a4a4a24 1 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=e42756f942ddb24bd3c999c29a4a4a24 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:38.799 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:39.057 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.lUv 00:24:39.057 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.lUv 00:24:39.057 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.lUv 00:24:39.057 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:39.057 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:39.057 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:39.057 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:39.057 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:39.057 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:39.057 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:39.057 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=45f1c29b008a063f4d44e110c771dfac1a58cce83c065ae1 00:24:39.057 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:39.057 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.F1U 00:24:39.057 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 45f1c29b008a063f4d44e110c771dfac1a58cce83c065ae1 2 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 45f1c29b008a063f4d44e110c771dfac1a58cce83c065ae1 2 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=45f1c29b008a063f4d44e110c771dfac1a58cce83c065ae1 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.F1U 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.F1U 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.F1U 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:39.058 14:18:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=440624a2d9d18ac360f7329090044620 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.2LS 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 440624a2d9d18ac360f7329090044620 0 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 440624a2d9d18ac360f7329090044620 0 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=440624a2d9d18ac360f7329090044620 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.2LS 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.2LS 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.2LS 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2392042d200e390defeab7dd1133c8aac41a581315c625ed5877f194795e482b 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.k2n 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2392042d200e390defeab7dd1133c8aac41a581315c625ed5877f194795e482b 3 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2392042d200e390defeab7dd1133c8aac41a581315c625ed5877f194795e482b 3 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2392042d200e390defeab7dd1133c8aac41a581315c625ed5877f194795e482b 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.k2n 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.k2n 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.k2n 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 309644 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 309644 ']' 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:39.058 14:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GzL 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.koK ]] 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.koK 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.8ZN 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.hYX ]] 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.hYX 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.316 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.317 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:39.317 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.cQw 00:24:39.317 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.317 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.317 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.317 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.lUv ]] 00:24:39.317 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lUv 00:24:39.317 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.317 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.317 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.317 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:39.317 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.F1U 00:24:39.317 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.317 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.2LS ]] 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.2LS 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.k2n 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.575 14:18:47 
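The gen_dhchap_key calls traced above produce the five key/ckey pairs: each reads len/2 random bytes as a hex string via xxd -p /dev/urandom, wraps it into a DHHC-1 secret with an inline python step, writes it mode 0600 to a mktemp file, and keyring_file_add_key then registers each file as key0..key4 / ckey0..ckey3 for later attach calls. xtrace hides the python body; the sketch below assumes the standard DH-HMAC-CHAP secret representation, base64 over the secret bytes plus a little-endian CRC-32 trailer, which is consistent with the strings printed later in this log (the base64 of key1 visibly begins with the hex digits 7901...):

    key_hex=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex chars
    keyfile=$(mktemp -t spdk.key-null.XXX)
    # DHHC-1:<digest id>:<base64(secret || crc32)>:  (0=none 1=sha256 2=sha384 3=sha512)
    python3 -c \
        'import base64,struct,sys,zlib;k=sys.argv[1].encode();print("DHHC-1:%02x:%s:"%(int(sys.argv[2]),base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode()))' \
        "$key_hex" 0 > "$keyfile"
    chmod 0600 "$keyfile"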
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:39.575 14:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:40.508 Waiting for block devices as requested 00:24:40.508 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:40.508 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:40.508 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:40.766 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:40.766 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:40.766 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:40.766 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:41.024 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:41.024 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:24:41.282 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:41.282 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:41.282 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:41.282 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:41.282 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:41.540 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:41.540 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:41.540 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:42.106 No valid GPT data, bailing 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:42.106 14:18:49 
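configure_kernel_target above hands the target role to the kernel nvmet driver rather than a second SPDK app: setup.sh reset rebinds the ioatdma channels and returns the NVMe SSD at 0000:0b:00.0 to the kernel nvme driver, then the first free, non-zoned namespace is picked as the backing device ("No valid GPT data, bailing" is the probe confirming /dev/nvme0n1 carries no partition table and is safe to claim). Condensed shape of that selection, with block_in_use being the scripts/common.sh helper traced above:

    for block in /sys/block/nvme*; do
        [[ -e $block ]] || continue
        [[ ! -e $block/queue/zoned || $(<"$block/queue/zoned") == none ]] || continue
        if ! block_in_use "${block##*/}"; then   # GPT/blkid probe found nothing -> free
            nvme=/dev/${block##*/}
            break
        fi
    done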
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:42.106 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:24:42.107 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:42.107 14:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:24:42.107 00:24:42.107 Discovery Log Number of Records 2, Generation counter 2 00:24:42.107 =====Discovery Log Entry 0====== 00:24:42.107 trtype: tcp 00:24:42.107 adrfam: ipv4 00:24:42.107 subtype: current discovery subsystem 00:24:42.107 treq: not specified, sq flow control disable supported 00:24:42.107 portid: 1 00:24:42.107 trsvcid: 4420 00:24:42.107 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:42.107 traddr: 10.0.0.1 00:24:42.107 eflags: none 00:24:42.107 sectype: none 00:24:42.107 =====Discovery Log Entry 1====== 00:24:42.107 trtype: tcp 00:24:42.107 adrfam: ipv4 00:24:42.107 subtype: nvme subsystem 00:24:42.107 treq: not specified, sq flow control disable supported 00:24:42.107 portid: 1 00:24:42.107 trsvcid: 4420 00:24:42.107 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:42.107 traddr: 10.0.0.1 00:24:42.107 eflags: none 00:24:42.107 sectype: none 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
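The mkdir/echo/ln sequence above is the entire kernel-target definition; xtrace prints the echo values but not their redirect targets, so the configfs attribute names in the sketch below are assumptions based on the standard nvmet ABI. The nvme discover output (2 records: the discovery subsystem plus cnode0) confirms the port is live; the hosts/ entry and allowed_hosts symlink then restrict access to nqn.2024-02.io.spdk:host0, and the nvmet_auth_set_key call that begins above programs that host's DH-HMAC-CHAP attributes:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
    host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
    mkdir "$host" && ln -s "$host" "$subsys/allowed_hosts/"
    echo 0 > "$subsys/attr_allow_any_host"       # only allow-listed hosts may connect
    echo 'hmac(sha256)' > "$host/dhchap_hash"    # digest under test
    echo ffdhe2048 > "$host/dhchap_dhgroup"      # DH group under test
    echo "$key" > "$host/dhchap_key"             # host secret (DHHC-1:00:Nzkw..., key 1 above)
    echo "$ckey" > "$host/dhchap_ctrl_key"       # controller secret (DHHC-1:02:ZGY1..., ckey 1)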
-- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.107 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.366 nvme0n1 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.366 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: ]] 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
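
For reference, each connect_authenticate iteration traced here reduces to the same four-step RPC sequence — a minimal sketch, assuming a running SPDK application and the in-tree scripts/rpc.py client (rpc_cmd in the trace is the test suite's wrapper around it); key0/ckey0 are the key names shown in the trace, registered earlier by auth.sh, and the kernel nvmet target side (the nvmet_auth_set_key step) is assumed to be provisioned separately:

  # restrict the host to one digest/dhgroup combination for this pass
  rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # connect with DH-HMAC-CHAP, using the host key and the controller (bidirectional) key
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # verify the controller came up, then tear it down before the next iteration
  rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  rpc.py bdev_nvme_detach_controller nvme0

The trace below repeats exactly this pattern for every keyid (0-4) and dhgroup (ffdhe2048 through ffdhe8192), with the ckey argument omitted for keyid 4, which has no controller key.
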
00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.367 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.625 nvme0n1 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.625 14:18:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.625 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.626 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.626 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.626 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.626 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.626 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.626 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.626 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.626 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.626 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.626 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.626 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.626 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:42.626 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.626 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.884 nvme0n1 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: ]] 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.884 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.885 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.885 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.885 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.885 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.885 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:42.885 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.885 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.143 nvme0n1 00:24:43.143 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.143 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.143 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.143 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:43.143 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.143 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.143 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.143 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.143 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.143 14:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: ]] 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.143 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.402 nvme0n1 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.402 nvme0n1 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.402 14:18:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.402 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.403 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.660 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: ]] 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.919 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.177 nvme0n1 00:24:44.177 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.177 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.177 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.177 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.177 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.177 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.177 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.177 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.177 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.177 14:18:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.177 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.178 
14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.178 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.436 nvme0n1 00:24:44.436 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.436 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.436 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.436 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.436 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.436 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.436 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.436 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: ]] 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.437 14:18:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.437 nvme0n1 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.437 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: ]] 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.695 14:18:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.695 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.695 nvme0n1 00:24:44.696 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.696 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.696 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.696 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.696 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:24:44.961 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:44.961 14:18:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.962 nvme0n1 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.962 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.273 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.273 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:45.273 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.273 14:18:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.273 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.273 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:45.273 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.273 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:45.273 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.273 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.273 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:45.273 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:45.273 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:45.273 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:45.273 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.273 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:45.583 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:45.583 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: ]] 00:24:45.583 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:45.583 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:45.583 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.583 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:45.583 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:45.583 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:45.583 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.583 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:45.583 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.583 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.866 nvme0n1 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.866 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.147 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.147 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.147 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.147 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.147 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.147 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.147 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.147 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:46.147 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.147 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.147 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:46.147 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:46.147 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:46.147 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:46.147 14:18:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.147 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.148 14:18:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.406 nvme0n1 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: ]] 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
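
The trace above repeats one initiator-side RPC sequence for every digest/dhgroup/keyid combination. Distilled into plain shell, a single connect_authenticate pass looks roughly like the sketch below. This is a minimal reconstruction from the xtrace output only: the rpc_cmd definition and SPDK_DIR are assumptions (in SPDK's test harness rpc_cmd forwards to scripts/rpc.py), and the named keys key2/ckey2 are registered earlier in the run, outside this excerpt.

# Minimal sketch of one connect_authenticate iteration (sha256 / ffdhe4096 / keyid 2),
# reconstructed from the xtrace above.
rpc_cmd() { "$SPDK_DIR/scripts/rpc.py" "$@"; }  # assumption: rpc_cmd wraps scripts/rpc.py

# Restrict the initiator to the digest/DH-group pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
# Attach with the DH-HMAC-CHAP host key and (optional) controller key.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# The connect only succeeds if authentication passed; verify, then tear down.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
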
00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.406 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.407 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.407 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.407 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.407 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.407 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.664 nvme0n1 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:24:46.664 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: ]] 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.665 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.923 nvme0n1 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.923 14:18:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.923 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.181 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.181 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.181 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.181 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.181 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.181 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.181 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.181 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:47.181 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.181 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:47.181 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:47.181 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:47.181 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:47.181 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.181 14:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.439 nvme0n1 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.439 14:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: ]] 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.338 14:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.596 nvme0n1 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 
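
The echo calls at host/auth.sh@48-51 in the trace are the target-side half of each iteration: nvmet_auth_set_key pushes the digest, DH group, and DHHC-1 secrets into the kernel nvmet entry for the host NQN before the initiator reconnects. The xtrace shows only the echoed values, not the redirect targets, so the configfs paths below are an assumption based on the standard Linux nvmet auth attributes; the values themselves are taken verbatim from this pass of the log (ffdhe6144, keyid 1).

# Sketch of the target-side key setup implied by host/auth.sh@48-51.
# Assumed configfs layout for nvmet host auth; only the echoed values are from the trace.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo 'ffdhe6144' > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==:' > "$host/dhchap_key"
echo 'DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==:' > "$host/dhchap_ctrl_key"
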
00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.596 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.597 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.597 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.597 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.597 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.597 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.597 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.597 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.597 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.597 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.597 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:49.597 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.597 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.597 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:49.597 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.597 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.162 nvme0n1 00:24:50.162 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.162 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.162 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.162 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.162 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.162 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.162 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.162 14:18:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.162 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.162 14:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: ]] 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.162 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.163 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.163 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.163 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.163 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.163 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.163 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:50.163 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.163 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.729 nvme0n1 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: ]] 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.729 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.730 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.730 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.730 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.730 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.730 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.730 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:50.730 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.730 14:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.296 nvme0n1 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.296 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.863 nvme0n1 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: ]] 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.863 14:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:52.797 nvme0n1 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:52.797 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.798 14:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.774 nvme0n1 00:24:53.774 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.774 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.774 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:53.775 
14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: ]] 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.775 14:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.338 nvme0n1 00:24:54.338 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.338 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.338 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.339 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.339 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.339 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.596 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.596 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.596 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.596 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.596 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.596 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: ]] 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.597 
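[Each connect_authenticate iteration in this trace follows the same host-side RPC sequence, driven by the nested digest/dhgroup/keyid loops visible at host/auth.sh@100-103. A condensed sketch, assuming rpc_cmd wraps SPDK's scripts/rpc.py against the running target and that the keyring names key0..key4 / ckey0..ckey4 were registered earlier in the run (not shown in this excerpt):

    # Per-key connect flow (host/auth.sh@55-65), reconstructed from the
    # trace; RPC names and arguments are taken verbatim from the rpc_cmd
    # invocations above.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Expands to (--dhchap-ctrlr-key ckeyN) or to nothing (see keyid 4)
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Pin the initiator to exactly one digest/dhgroup combination
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # get_main_ns_ip resolves NVMF_INITIATOR_IP (10.0.0.1) for tcp
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # Authentication passed iff the controller actually materialized
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    # Driver matrix (host/auth.sh@100-103): every digest is paired with
    # every dhgroup, and every key is tried under that combination.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
]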
14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.597 14:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.529 nvme0n1 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.529 14:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.463 nvme0n1 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: ]] 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.463 nvme0n1 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:56.463 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.464 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.723 nvme0n1 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:56.723 14:19:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: ]] 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.723 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.981 nvme0n1 00:24:56.981 14:19:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.981 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.981 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.981 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.981 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.981 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.981 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.981 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.981 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.981 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.981 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.981 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.981 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:56.981 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.981 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.981 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: ]] 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.982 14:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.240 nvme0n1 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.240 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.498 nvme0n1 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: ]] 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.498 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.499 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:57.499 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.499 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.756 nvme0n1 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.756 
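[Note how the keyid 4 passes above attach with --dhchap-key key4 but no --dhchap-ctrlr-key: ckeys[4] is empty, so bidirectional authentication is skipped for that key. The mechanism is the ${var:+word} expansion at host/auth.sh@58, which emits the flag pair only when a controller key exists. A minimal stand-alone illustration, with the key material elided:

    # ${ckeys[keyid]:+...} expands to two words when ckeys[keyid] is
    # non-empty, and to nothing at all when it is unset or empty.
    ckeys=([0]="DHHC-1:03:..." [4]="")
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid ${keyid}: ${#ckey[@]} extra attach args"  # 2, then 0
    done
]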
14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:57.756 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.757 14:19:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.757 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.015 nvme0n1 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: ]] 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.015 14:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.273 nvme0n1 00:24:58.273 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.273 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.273 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.273 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.273 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.273 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.273 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:24:58.273 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.273 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.273 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.273 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.273 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.273 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:58.273 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: ]] 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.274 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.532 nvme0n1 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:58.532 
14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.532 nvme0n1 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.532 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.532 
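[editor's sketch] The trace above repeats one pattern per (dhgroup, keyid) pair. A condensed paraphrase of the host-side loop being traced (host/auth.sh@101-104 driving connect_authenticate at @55-65) is below; it is reconstructed only from the commands visible in this log, where rpc_cmd wraps SPDK's RPC client and keys[]/ckeys[] hold the DHHC-1 secrets echoed above. Array names and loop shape are assumptions inferred from the traced line numbers, not copied from the script source.

  for dhgroup in "${dhgroups[@]}"; do                    # ffdhe3072, ffdhe4096, ffdhe6144, ...
      for keyid in "${!keys[@]}"; do                     # keyids 0..4
          nvmet_auth_set_key sha384 "$dhgroup" "$keyid"  # program the target side (auth.sh@103)
          # restrict the initiator to a single digest/dhgroup pair (auth.sh@60)
          rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          # authenticate; the ctrlr key is passed only when ckey$keyid exists,
          # i.e. when bidirectional authentication is being exercised (auth.sh@58, @61)
          rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
              -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
              --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
          # a successful handshake leaves exactly one controller named nvme0 (auth.sh@64)
          [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
          rpc_cmd bdev_nvme_detach_controller nvme0      # tear down before the next pair (auth.sh@65)
      done
  done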
14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: ]] 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.791 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.049 nvme0n1 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:59.049 14:19:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.049 14:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.307 nvme0n1 00:24:59.307 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: ]] 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.308 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.565 nvme0n1 00:24:59.565 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.565 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.565 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.565 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.565 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.565 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: ]] 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.824 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.082 nvme0n1 00:25:00.082 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.082 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.082 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.082 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.082 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.082 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.082 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.083 14:19:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.083 14:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.341 nvme0n1 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: ]] 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.341 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.906 nvme0n1 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.906 14:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.471 nvme0n1 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.471 14:19:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:01.471 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: ]] 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.472 14:19:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.472 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.038 nvme0n1 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: ]] 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:02.038 14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.038 
14:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.602 nvme0n1 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.603 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.168 nvme0n1 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.168 14:19:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: ]] 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.168 14:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.102 nvme0n1 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.102 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.103 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.103 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.103 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.103 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.103 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.103 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.103 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.103 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.103 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.103 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.103 14:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.036 nvme0n1 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: ]] 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.036 
14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.036 14:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.602 nvme0n1 00:25:05.602 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: ]] 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.860 14:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.794 nvme0n1 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.794 14:19:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.794 14:19:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.794 14:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.727 nvme0n1 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.727 nvme0n1 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.727 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.985 nvme0n1 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:07.985 
14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: ]] 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:07.985 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.986 14:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.244 nvme0n1 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: ]] 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.244 
14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.244 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.503 nvme0n1 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.503 nvme0n1 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.503 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: ]] 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.762 nvme0n1 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.762 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.020 
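Each pass in the trace above exercises one digest/dhgroup/keyid combination end to end: bdev_nvme_set_options restricts the initiator to a single DH-HMAC-CHAP digest and FFDHE group, bdev_nvme_attach_controller connects over TCP with that keyid's keyring entries, bdev_nvme_get_controllers confirms the controller actually authenticated, and bdev_nvme_detach_controller tears it down before the next iteration. The following is a minimal standalone sketch of that host-side cycle, assuming SPDK's scripts/rpc.py client run from a source checkout and that the DHHC-1 secrets were already registered in the keyring under the names key0/ckey0 by setup steps outside this excerpt:

#!/usr/bin/env bash
# Sketch of one connect_authenticate cycle, reconstructed from the trace;
# the rpc.py path and the pre-registered keyring names are assumptions.
rpc=./scripts/rpc.py

# Allow only the combination under test (here: sha512 + ffdhe3072).
"$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Connect with keyid 0's host and controller keys from the keyring.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# A controller named nvme0 only shows up if authentication succeeded.
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0

The get_main_ns_ip helper visible in the trace picks the address by transport (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), which is why the same 10.0.0.1 recurs in every attach call of this tcp run.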
14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.020 14:19:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.020 14:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.020 nvme0n1 00:25:09.020 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.020 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.020 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.020 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.020 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.020 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:25:09.278 14:19:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: ]] 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.278 nvme0n1 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.278 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.535 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.535 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.535 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: ]] 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.536 14:19:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.536 nvme0n1 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.536 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:09.794 
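On the target side, nvmet_auth_set_key (host/auth.sh@42-51 in the trace) mirrors the same digest, FFDHE group, and DHHC-1 secrets for each keyid; the trace records the echoed values but not where they are written. A plausible reconstruction for the keyid 3 iteration just shown, assuming the suite drives the Linux kernel nvmet target through its standard configfs host attributes (the paths below are an assumption, not visible in this log; the echoed values are copied verbatim):

# Assumed nvmet configfs layout; only the echoed values are attested above.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha512)' > "$host/dhchap_hash"    # digest for this iteration (auth.sh@48)
echo ffdhe3072 > "$host/dhchap_dhgroup"      # FFDHE group under test (auth.sh@49)
# Host secret for keyid 3 (auth.sh@50):
echo 'DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==:' \
    > "$host/dhchap_key"
# Bidirectional auth: a controller key is set only when a ckey exists (auth.sh@51):
echo 'DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D:' \
    > "$host/dhchap_ctrl_key"

For keyid 4 the trace shows an empty ckey, so the auth.sh@51 guard skips the controller-key write and the connection authenticates unidirectionally.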
14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:09.794 nvme0n1 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:09.794 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: ]] 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:09.795 14:19:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.795 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.052 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.052 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.052 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.052 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.052 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.052 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.052 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.052 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.052 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.052 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.052 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.052 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.052 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.052 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.052 14:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.310 nvme0n1 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.310 14:19:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.310 14:19:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.310 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.567 nvme0n1 00:25:10.567 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.567 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.567 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.567 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.567 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.567 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.567 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.567 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.567 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: ]] 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.568 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.825 nvme0n1 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: ]] 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.825 14:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.082 nvme0n1 00:25:11.082 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.082 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.082 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.082 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.082 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.339 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.597 nvme0n1 00:25:11.597 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.597 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.597 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.597 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.597 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.597 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.597 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.597 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.597 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.597 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.597 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.597 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:11.597 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: ]] 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.598 14:19:19 
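[Editor's note] Every `connect_authenticate <digest> <dhgroup> <keyid>` pass in this log (the sha512/ffdhe6144 pass starts just above) follows the same steps, all of which are visible in the trace at host/auth.sh@55-61: restrict the initiator's DH-HMAC-CHAP parameters, then attach with the per-keyid secret. A sketch assembled from those traced commands; `rpc_cmd` is the suite's JSON-RPC wrapper and the NQNs and port are taken verbatim from the log.

```bash
# connect_authenticate as traced at host/auth.sh@55-61.
connect_authenticate() {
	local digest dhgroup keyid ckey
	digest=$1 dhgroup=$2 keyid=$3
	# Expands to "--dhchap-ctrlr-key ckeyN" only when a controller key exists.
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	# Pin the initiator to exactly the digest/dhgroup under test.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	# Connect with the host key (and, when present, the controller key).
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"
}
```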
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.598 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.162 nvme0n1 00:25:12.162 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.162 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.162 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.162 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.162 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.162 14:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.162 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.162 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.162 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.162 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.162 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.162 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.163 14:19:20 
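[Editor's note] The `ip_candidates` block that repeats before every attach (nvmf/common.sh@741-755) resolves which address the initiator should dial for the transport under test. A reconstruction from the trace; the final variable indirection is inferred from the fact that the trace first assigns the *name* `NVMF_INITIATOR_IP` (@748) and then tests the *value* `10.0.0.1` (@750).

```bash
# get_main_ns_ip, reconstructed from the trace at nvmf/common.sh@741-755.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		[rdma]=NVMF_FIRST_TARGET_IP  # rdma tests dial the target-side address
		[tcp]=NVMF_INITIATOR_IP      # tcp tests dial the initiator-side address
	)

	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}
	ip=${!ip}  # indirection (inferred): NVMF_INITIATOR_IP -> 10.0.0.1
	[[ -z $ip ]] && return 1
	echo "$ip"
}
```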
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.163 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.729 nvme0n1 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: ]] 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.729 14:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.295 nvme0n1 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: ]] 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.295 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.861 nvme0n1 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.861 14:19:21 
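[Editor's note] All secrets in this log use the NVMe-oF DH-HMAC-CHAP text form `DHHC-1:<id>:<base64>:`, where the two-digit id encodes the hash the secret is sized for (00 = unqualified length, 01 = SHA-256/32 bytes, 02 = SHA-384/48 bytes, 03 = SHA-512/64 bytes). Assuming the payload is base64(secret || CRC-32), as `nvme gen-dhchap-key` produces, the shapes can be sanity-checked; the helper below is illustrative, not part of the suite.

```bash
# Sanity-check the shape of a DHHC-1 secret (hypothetical helper).
check_dhchap_secret() {
	local secret=$1 id b64 bytes
	[[ $secret =~ ^DHHC-1:([0-9]{2}):([A-Za-z0-9+/=]+):$ ]] || return 1
	id=${BASH_REMATCH[1]} b64=${BASH_REMATCH[2]}
	bytes=$(printf '%s' "$b64" | base64 -d | wc -c)
	case $id in
		01) ((bytes == 32 + 4)) ;;  # SHA-256-sized secret + 4-byte CRC-32
		02) ((bytes == 48 + 4)) ;;  # SHA-384-sized
		03) ((bytes == 64 + 4)) ;;  # SHA-512-sized
		00) ((bytes > 4)) ;;        # length not tied to a hash
		*) return 1 ;;
	esac && echo "ok: id=$id, $bytes-byte payload"
}

# keyid 2's secret from the log: id 01 decodes to 36 bytes (32 + 4).
check_dhchap_secret "DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf:"
```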
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.861 14:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.119 nvme0n1 00:25:14.119 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.119 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.119 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.119 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.119 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.119 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
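[Editor's note] The `for dhgroup in "${dhgroups[@]}"` / `for keyid in "${!keys[@]}"` markers that resume just below (host/auth.sh@101-102) are the driver for everything in this stretch of the log: one pass per DH group, and within it one set-key/connect/verify cycle per key index. In outline, using the loop variables exactly as traced (`dhgroups`, `keys`, and `ckeys` are populated earlier in the suite; only ffdhe4096 through ffdhe8192 are visible in this excerpt):

```bash
# The sha512 test matrix driven at host/auth.sh@101-103.
for dhgroup in "${dhgroups[@]}"; do
	for keyid in "${!keys[@]}"; do
		nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # program the target side
		connect_authenticate sha512 "$dhgroup" "$keyid"  # connect, verify, detach
	done
done
```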
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjIyMTBmNzdhZWRiN2JlMGE2ZjZjNTMxYWJiNTFhNmR2kUqO: 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: ]] 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDFjM2EwMjhkYWFmMDFlNWUyY2FlNGU4ZmUwNGE1OWM3NGI4ODY4ZTQ4YTIwZjlhZDNjYTFhZDgwNzgxNWNiOD9idjE=: 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.377 14:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.317 nvme0n1 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.317 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.252 nvme0n1 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.252 14:19:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk3NmVmZTA1MjA3MGNlNWE1OTRhMWJmNzBiY2I2ZGWBS6Zf: 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: ]] 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQyNzU2Zjk0MmRkYjI0YmQzYzk5OWMyOWE0YTRhMjR6nWvq: 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:16.252 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.253 14:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.253 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.253 14:19:24 
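[Editor's note] After each successful attach, the suite confirms a controller actually materialized before tearing it down (host/auth.sh@64-65): list the controllers, extract `.name` with jq, compare against the expected `nvme0`, then detach. The interleaved `nvme0n1` lines are the namespace appearing as a side effect of each attach. Condensed from the trace:

```bash
# Post-connect verification and teardown as traced at host/auth.sh@64-65.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                     # the glob-escaped compare in the log
rpc_cmd bdev_nvme_detach_controller nvme0  # clean up before the next keyid
```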
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.253 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.253 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.253 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.253 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.253 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.253 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.253 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.253 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.253 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.253 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.253 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.253 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.253 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.818 nvme0n1 00:25:16.818 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.818 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.818 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.818 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.818 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVmMWMyOWIwMDhhMDYzZjRkNDRlMTEwYzc3MWRmYWMxYTU4Y2NlODNjMDY1YWUx/fPkeA==: 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: ]] 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQwNjI0YTJkOWQxOGFjMzYwZjczMjkwOTAwNDQ2MjBCrX1D: 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:17.076 14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.076 
14:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.009 nvme0n1 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM5MjA0MmQyMDBlMzkwZGVmZWFiN2RkMTEzM2M4YWFjNDFhNTgxMzE1YzYyNWVkNTg3N2YxOTQ3OTVlNDgyYvCZVHQ=: 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
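[Editor's note] The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line that just ran for keyid 4 is what switches between bidirectional and unidirectional authentication: bash's `:+` expansion emits the flag pair only when `ckeys[keyid]` is non-empty, which is why the keyid-4 attaches in this log carry no `--dhchap-ctrlr-key` argument. A standalone demonstration (the array values here are hypothetical stand-ins):

```bash
# How the :+ expansion drops --dhchap-ctrlr-key when no controller key is set.
declare -a ckeys=([1]="DHHC-1:02:placeholder:" [4]="")  # hypothetical values

for keyid in 1 4; do
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
	echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
done
# keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
# keyid=4 -> 0 extra args:
```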
common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.009 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.010 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.010 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.010 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.010 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:18.010 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.010 14:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.944 nvme0n1 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzkwMWVlYTY1NmU1ZDY2NWFkZDcyMGQ2ZWQ0NTFhZjQ3Nzg4MWFhYzIyNzBmZTJhi0COUQ==: 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: ]] 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGY1YTJjYmY3NDVmNzFlZDcyZDMwYTMxOGMyZmU0ODRlM2E4MjkxNTEwOTg1NmQ0OthzsQ==: 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.944 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
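[Editor's note] `rpc_cmd` in these traces is the suite's shorthand for SPDK's JSON-RPC client, so the keyless attach at host/auth.sh@112 corresponds to an invocation like the one below. The `scripts/rpc.py` flag names are an assumption based on current SPDK; the log itself only shows the wrapper. With the target now requiring sha256/ffdhe2048 authentication, a connect that offers no key must fail.

```bash
# Hypothetical direct equivalent of the traced attach (flag names assumed).
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
	-a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
# With no --dhchap-key, the target rejects the connect and the RPC returns
# code -5, "Input/output error", as shown in the request/response below.
```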
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.945 request: 00:25:18.945 { 00:25:18.945 "name": "nvme0", 00:25:18.945 "trtype": "tcp", 00:25:18.945 "traddr": "10.0.0.1", 00:25:18.945 "adrfam": "ipv4", 00:25:18.945 "trsvcid": "4420", 00:25:18.945 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:18.945 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:18.945 "prchk_reftag": false, 00:25:18.945 "prchk_guard": false, 00:25:18.945 "hdgst": false, 00:25:18.945 "ddgst": false, 00:25:18.945 "method": "bdev_nvme_attach_controller", 00:25:18.945 "req_id": 1 00:25:18.945 } 00:25:18.945 Got JSON-RPC error response 00:25:18.945 response: 00:25:18.945 { 00:25:18.945 "code": -5, 00:25:18.945 "message": "Input/output error" 00:25:18.945 } 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.945 14:19:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.945 request: 00:25:18.945 { 00:25:18.945 "name": "nvme0", 00:25:18.945 "trtype": "tcp", 00:25:18.945 "traddr": "10.0.0.1", 00:25:18.945 "adrfam": "ipv4", 00:25:18.945 "trsvcid": "4420", 00:25:18.945 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:18.945 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:18.945 "prchk_reftag": false, 00:25:18.945 "prchk_guard": false, 00:25:18.945 "hdgst": false, 00:25:18.945 "ddgst": false, 00:25:18.945 "dhchap_key": "key2", 00:25:18.945 "method": "bdev_nvme_attach_controller", 00:25:18.945 "req_id": 1 00:25:18.945 } 00:25:18.945 Got JSON-RPC error response 00:25:18.945 response: 00:25:18.945 { 00:25:18.945 "code": -5, 00:25:18.945 "message": "Input/output error" 00:25:18.945 } 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.945 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.945 request: 00:25:18.945 { 00:25:18.945 "name": "nvme0", 00:25:18.945 "trtype": "tcp", 00:25:18.945 "traddr": "10.0.0.1", 00:25:18.945 "adrfam": "ipv4", 00:25:18.945 "trsvcid": "4420", 00:25:18.945 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:18.945 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:18.945 "prchk_reftag": false, 00:25:18.945 "prchk_guard": false, 00:25:18.945 "hdgst": false, 00:25:18.945 "ddgst": false, 00:25:18.945 "dhchap_key": "key1", 00:25:18.945 "dhchap_ctrlr_key": "ckey2", 00:25:18.945 "method": "bdev_nvme_attach_controller", 00:25:18.946 "req_id": 1 00:25:18.946 } 00:25:18.946 Got JSON-RPC error response 00:25:18.946 response: 00:25:18.946 { 00:25:18.946 "code": -5, 00:25:19.204 "message": "Input/output error" 00:25:19.204 } 00:25:19.204 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:19.204 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:19.204 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:19.204 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:19.204 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:19.204 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:19.204 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:19.204 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:19.204 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:19.204 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:19.204 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:19.204 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:19.204 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:19.204 14:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:19.204 rmmod nvme_tcp 00:25:19.204 rmmod nvme_fabrics 00:25:19.204 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:19.204 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:19.204 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:19.204 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 309644 ']' 00:25:19.204 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 309644 00:25:19.204 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 309644 ']' 00:25:19.204 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 309644 00:25:19.204 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:25:19.204 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:19.204 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 309644 00:25:19.204 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:19.204 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:19.204 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 309644' 00:25:19.205 killing process with pid 309644 00:25:19.205 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 309644 00:25:19.205 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 309644 00:25:19.464 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:19.464 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:19.464 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:19.464 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:19.464 14:19:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:19.464 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.464 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.464 14:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.373 14:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:21.373 14:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:21.373 14:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:21.373 14:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:21.373 14:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:21.373 14:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:21.373 14:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:21.373 14:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:21.373 14:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:21.373 14:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:21.373 14:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:21.373 14:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:21.373 14:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:22.749 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:22.749 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:22.749 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:22.749 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:22.749 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:22.749 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:22.749 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:22.749 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:22.749 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:22.749 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:22.749 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:22.749 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:22.749 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:22.749 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:22.749 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:22.749 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:23.687 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:25:23.945 14:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.GzL /tmp/spdk.key-null.8ZN /tmp/spdk.key-sha256.cQw /tmp/spdk.key-sha384.F1U /tmp/spdk.key-sha512.k2n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:23.945 14:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:24.881 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:24.881 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:24.881 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:24.881 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:24.881 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:24.881 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:24.881 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:24.881 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:24.881 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:24.881 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:24.881 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:24.881 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:24.881 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:24.881 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:24.881 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:24.881 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:24.881 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:25.140 00:25:25.140 real 0m49.843s 00:25:25.140 user 0m46.909s 00:25:25.140 sys 0m5.669s 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.140 ************************************ 00:25:25.140 END TEST nvmf_auth_host 00:25:25.140 ************************************ 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.140 ************************************ 00:25:25.140 START TEST nvmf_digest 00:25:25.140 ************************************ 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:25.140 * Looking for test storage... 
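Note: digest.sh next sources test/nvmf/common.sh, which (as traced below) derives a per-run host identity with nvme-cli before any connections are made. A minimal sketch of that step, assuming nvme-cli is installed; the exact parameter expansion for the host ID is a guess, since the trace only shows its result:

    # sketch: derive the host NQN and host ID the way nvmf/common.sh does below
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed expansion: strip the NQN prefix, keep the uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")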
00:25:25.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.140 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:25.409 
14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:25.409 14:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:27.321 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:27.321 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.321 
14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:27.321 Found net devices under 0000:09:00.0: cvl_0_0 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:27.321 Found net devices under 0000:09:00.1: cvl_0_1 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.321 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.322 14:19:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:27.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:25:27.322 00:25:27.322 --- 10.0.0.2 ping statistics --- 00:25:27.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.322 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:27.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:25:27.322 00:25:27.322 --- 10.0.0.1 ping statistics --- 00:25:27.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.322 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:27.322 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:27.580 ************************************ 00:25:27.580 START TEST nvmf_digest_clean 00:25:27.580 ************************************ 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=319117 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 319117 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 319117 ']' 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:27.580 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:27.580 [2024-07-26 14:19:35.399492] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:25:27.580 [2024-07-26 14:19:35.399587] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.580 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.580 [2024-07-26 14:19:35.465034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.580 [2024-07-26 14:19:35.571223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.580 [2024-07-26 14:19:35.571274] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.580 [2024-07-26 14:19:35.571288] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.580 [2024-07-26 14:19:35.571298] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.580 [2024-07-26 14:19:35.571308] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
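Note: nvmf_tgt was launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, so it idles after the EAL messages above until the harness sends framework_start_init; the trace below then shows a null bdev, the TCP transport, and a listener on 10.0.0.2:4420 coming up. A condensed sketch of that target-side sequence; the subsystem options and null-bdev sizes are illustrative, not read from this log:

    # sketch: target bring-up behind --wait-for-rpc
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    scripts/rpc.py framework_start_init
    scripts/rpc.py bdev_null_create null0 100 4096       # bdev name from the trace; sizes assumed
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420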
00:25:27.580 [2024-07-26 14:19:35.571347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:27.839 null0 00:25:27.839 [2024-07-26 14:19:35.743680] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.839 [2024-07-26 14:19:35.767920] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=319149 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 319149 /var/tmp/bperf.sock 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 319149 ']' 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:25:27.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:27.839 14:19:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:27.839 [2024-07-26 14:19:35.817636] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:25:27.839 [2024-07-26 14:19:35.817722] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319149 ] 00:25:27.839 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.097 [2024-07-26 14:19:35.876220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.097 [2024-07-26 14:19:35.985397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.097 14:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:28.097 14:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:28.097 14:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:28.097 14:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:28.097 14:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:28.663 14:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:28.663 14:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:28.920 nvme0n1 00:25:28.920 14:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:28.920 14:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:28.920 Running I/O for 2 seconds... 
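Note: the initiator side is a second SPDK app, bdevperf, driven over its own RPC socket; data digest is requested at attach time with --ddgst, which is what puts crc32c on the I/O path being measured. Condensed from the commands traced above (paths relative to the spdk checkout):

    # initiator bring-up, condensed from the trace above
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

After the 2-second randread job reports the table below, the harness reads accel_get_stats from the same socket and checks that crc32c executed in the expected module (software here, since scan_dsa=false).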
00:25:31.449 00:25:31.449 Latency(us) 00:25:31.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.449 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:31.449 nvme0n1 : 2.01 19564.25 76.42 0.00 0.00 6533.54 3446.71 19126.80 00:25:31.449 =================================================================================================================== 00:25:31.449 Total : 19564.25 76.42 0.00 0.00 6533.54 3446.71 19126.80 00:25:31.449 0 00:25:31.449 14:19:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:31.449 14:19:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:31.449 14:19:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:31.449 14:19:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:31.449 14:19:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:31.449 | select(.opcode=="crc32c") 00:25:31.449 | "\(.module_name) \(.executed)"' 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 319149 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 319149 ']' 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 319149 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 319149 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 319149' 00:25:31.449 killing process with pid 319149 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 319149 00:25:31.449 Received shutdown signal, test time was about 2.000000 seconds 00:25:31.449 00:25:31.449 Latency(us) 00:25:31.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.449 =================================================================================================================== 00:25:31.449 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 319149 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=319553 00:25:31.449 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:31.450 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 319553 /var/tmp/bperf.sock 00:25:31.450 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 319553 ']' 00:25:31.450 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:31.450 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:31.450 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:31.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:31.450 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:31.450 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:31.708 [2024-07-26 14:19:39.488837] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:25:31.708 [2024-07-26 14:19:39.488929] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319553 ] 00:25:31.708 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:31.708 Zero copy mechanism will not be used. 
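Note: this second pass uses 131072-byte reads at queue depth 16, which exceed the posix sock module's 65536-byte zero-copy send threshold, hence the notice above that zero copy is disabled for the run. The threshold is tunable per sock implementation before framework_start_init; a hedged example, assuming a recent SPDK rpc.py where sock_impl_set_options exposes this option:

    # sketch: raise the zero-copy threshold so 128 KiB sends still use zero copy
    scripts/rpc.py -s /var/tmp/bperf.sock sock_impl_set_options \
        -i posix --zerocopy-threshold 131072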
00:25:31.708 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.708 [2024-07-26 14:19:39.546348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.708 [2024-07-26 14:19:39.656684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.708 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:31.708 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:31.708 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:31.708 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:31.708 14:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:32.274 14:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:32.274 14:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:32.531 nvme0n1 00:25:32.531 14:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:32.532 14:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:32.532 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:32.532 Zero copy mechanism will not be used. 00:25:32.532 Running I/O for 2 seconds... 
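The run above is the standard bperf flow host/digest.sh uses for every workload in this suite: bdevperf is started idle (-z keeps it resident, --wait-for-rpc defers framework init) and is then driven entirely over its UNIX-domain RPC socket. A minimal sketch of that flow, with the workspace prefix abbreviated to $SPDK (an abbreviation for this note, not a variable the suite sets):

    # start bdevperf idle; digest.sh waits for /var/tmp/bperf.sock via waitforlisten
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    # complete framework init, then attach the TCP target with data digest enabled
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the timed workload against the attached nvme0n1 bdev
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests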
00:25:35.057
00:25:35.057 Latency(us)
00:25:35.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:35.057 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:35.057 nvme0n1 : 2.00 5881.65 735.21 0.00 0.00 2715.92 625.02 11068.30
00:25:35.058 ===================================================================================================================
00:25:35.058 Total : 5881.65 735.21 0.00 0.00 2715.92 625.02 11068.30
00:25:35.058 0
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:35.058 | select(.opcode=="crc32c")
00:25:35.058 | "\(.module_name) \(.executed)"'
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 319553
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 319553 ']'
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 319553
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 319553
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 319553'
killing process with pid 319553
14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 319553
Received shutdown signal, test time was about 2.000000 seconds
00:25:35.058
00:25:35.058 Latency(us)
00:25:35.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:35.058 ===================================================================================================================
00:25:35.058 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:35.058 14:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@974 -- # wait 319553 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=320028 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 320028 /var/tmp/bperf.sock 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 320028 ']' 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:35.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:35.058 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:35.058 [2024-07-26 14:19:43.059815] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:25:35.058 [2024-07-26 14:19:43.059949] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320028 ] 00:25:35.339 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.339 [2024-07-26 14:19:43.120347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.339 [2024-07-26 14:19:43.228037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.339 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:35.339 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:35.339 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:35.339 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:35.339 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:35.604 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:35.604 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:36.168 nvme0n1 00:25:36.168 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:36.168 14:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:36.168 Running I/O for 2 seconds... 
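After each timed run, digest.sh checks that the digests were really computed by the expected accel module. The jq filter traced after every latency table reduces accel_get_stats to a "module executed" pair for the crc32c opcode; a sketch of that check as a bash fragment (again abbreviating the workspace prefix to $SPDK):

    # expected module is "software" in these runs (scan_dsa=false, no DSA offload)
    read -r acc_module acc_executed < <(
        $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    # digest.sh@95-96: digests must have run, and on the expected module
    (( acc_executed > 0 )) && [[ $acc_module == software ]]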
00:25:38.067
00:25:38.067 Latency(us)
00:25:38.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:38.067 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:38.067 nvme0n1 : 2.01 19744.24 77.13 0.00 0.00 6467.21 2730.67 11456.66
00:25:38.067 ===================================================================================================================
00:25:38.067 Total : 19744.24 77.13 0.00 0.00 6467.21 2730.67 11456.66
00:25:38.067 0
00:25:38.067 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:38.067 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:38.067 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:38.067 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:38.067 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:38.067 | select(.opcode=="crc32c")
00:25:38.067 | "\(.module_name) \(.executed)"'
00:25:38.325 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:38.325 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:38.325 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:38.325 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:38.325 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 320028
00:25:38.325 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 320028 ']'
00:25:38.325 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 320028
00:25:38.325 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:25:38.325 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:38.325 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 320028
00:25:38.325 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:25:38.325 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:25:38.325 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 320028'
killing process with pid 320028
14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 320028
Received shutdown signal, test time was about 2.000000 seconds
00:25:38.325
00:25:38.325 Latency(us)
00:25:38.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:38.325 ===================================================================================================================
00:25:38.325 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:38.325 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@974 -- # wait 320028 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=320478 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 320478 /var/tmp/bperf.sock 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 320478 ']' 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:38.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:38.583 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:38.841 [2024-07-26 14:19:46.635165] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:25:38.841 [2024-07-26 14:19:46.635254] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320478 ] 00:25:38.841 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:38.841 Zero copy mechanism will not be used. 
00:25:38.841 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.841 [2024-07-26 14:19:46.692700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.841 [2024-07-26 14:19:46.797799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.841 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.841 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:38.841 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:38.841 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:38.841 14:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:39.408 14:19:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:39.408 14:19:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:39.665 nvme0n1 00:25:39.665 14:19:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:39.665 14:19:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:39.922 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:39.922 Zero copy mechanism will not be used. 00:25:39.922 Running I/O for 2 seconds... 
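Teardown of every bperf instance goes through the killprocess helper whose xtrace follows each latency table. Condensed from those traces into a sketch (the real helper in common/autotest_common.sh covers more cases, e.g. sudo-wrapped processes, which the '[' reactor_1 = sudo ']' test probes for):

    killprocess() {                                      # condensed sketch, not the full helper
        local pid=$1
        [ -n "$pid" ] || return 1                        # @950: a pid must be supplied
        kill -0 "$pid" || return 1                       # @954: probe that the pid is still alive
        process_name=$(ps --no-headers -o comm= "$pid")  # @955-956: "reactor_1" in these runs
        echo "killing process with pid $pid"             # @968
        kill "$pid"                                      # @969: plain signal for non-sudo processes
        wait "$pid"                                      # @974: reap it and propagate its exit status
    }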
00:25:41.820
00:25:41.820 Latency(us)
00:25:41.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:41.820 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:25:41.820 nvme0n1 : 2.00 6147.58 768.45 0.00 0.00 2595.47 1941.81 8543.95
00:25:41.820 ===================================================================================================================
00:25:41.820 Total : 6147.58 768.45 0.00 0.00 2595.47 1941.81 8543.95
00:25:41.820 0
00:25:41.820 14:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:41.820 14:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:41.820 14:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:41.820 14:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:41.820 14:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:41.820 | select(.opcode=="crc32c")
00:25:41.820 | "\(.module_name) \(.executed)"'
00:25:42.078 14:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:42.078 14:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:42.078 14:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:42.078 14:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:42.078 14:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 320478
00:25:42.078 14:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 320478 ']'
00:25:42.078 14:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 320478
00:25:42.078 14:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:25:42.078 14:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:42.078 14:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 320478
00:25:42.078 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:25:42.078 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:25:42.078 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 320478'
killing process with pid 320478
14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 320478
Received shutdown signal, test time was about 2.000000 seconds
00:25:42.078
00:25:42.078 Latency(us)
00:25:42.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:42.078 ===================================================================================================================
00:25:42.078 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:42.078 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@974 -- # wait 320478 00:25:42.336 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 319117 00:25:42.336 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 319117 ']' 00:25:42.336 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 319117 00:25:42.336 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:42.336 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:42.336 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 319117 00:25:42.336 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:42.336 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:42.336 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 319117' 00:25:42.336 killing process with pid 319117 00:25:42.336 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 319117 00:25:42.336 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 319117 00:25:42.594 00:25:42.594 real 0m15.216s 00:25:42.594 user 0m29.731s 00:25:42.594 sys 0m4.433s 00:25:42.594 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:42.594 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:42.594 ************************************ 00:25:42.594 END TEST nvmf_digest_clean 00:25:42.594 ************************************ 00:25:42.594 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:42.594 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:42.594 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:42.594 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:42.852 ************************************ 00:25:42.852 START TEST nvmf_digest_error 00:25:42.852 ************************************ 00:25:42.852 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:25:42.852 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:42.852 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:42.852 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:42.852 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:42.852 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=320924 00:25:42.852 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:42.852 14:19:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 320924 00:25:42.852 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 320924 ']' 00:25:42.852 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.852 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:42.852 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.852 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:42.852 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:42.852 [2024-07-26 14:19:50.665155] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:25:42.852 [2024-07-26 14:19:50.665236] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.852 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.852 [2024-07-26 14:19:50.726204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.852 [2024-07-26 14:19:50.832027] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.852 [2024-07-26 14:19:50.832079] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.852 [2024-07-26 14:19:50.832101] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.853 [2024-07-26 14:19:50.832112] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.853 [2024-07-26 14:19:50.832121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:42.853 [2024-07-26 14:19:50.832146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.110 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:43.110 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:25:43.110 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:43.110 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:43.110 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:43.110 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:43.110 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:43.110 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.110 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:43.110 [2024-07-26 14:19:50.896658] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:43.110 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.110 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:43.110 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:43.110 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.110 14:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:43.110 null0 00:25:43.110 [2024-07-26 14:19:51.002101] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.110 [2024-07-26 14:19:51.026279] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.110 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.110 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:43.110 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:43.110 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:43.110 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:43.110 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:43.110 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=321040 00:25:43.110 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:43.110 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 321040 /var/tmp/bperf.sock 00:25:43.110 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 321040 ']' 
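What distinguishes this nvmf_digest_error phase from the clean runs above: at target startup the crc32c opcode was routed to the error-injection accel module (the accel_rpc.c notice above), and once the bperf session is attached the test arms that module to corrupt results, so the host's data digest checks fail and the I/O completes through the retry path (bdev_nvme_set_options --bdev-retry-count -1 below). That is what the long run of "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR entries further down shows. A sketch of the target-side RPCs involved, as traced in this log (rpc_cmd in the traces wraps the same rpc.py script against the target's default socket):

    # at nvmf_tgt startup: assign crc32c to the "error" accel module
    rpc.py accel_assign_opc -o crc32c -m error
    # per test: clear any previous injection, then corrupt 256 crc32c operations
    rpc.py accel_error_inject_error -o crc32c -t disable
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256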
00:25:43.110 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:43.110 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:43.110 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:43.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:43.110 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:43.110 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:43.110 [2024-07-26 14:19:51.072387] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:25:43.110 [2024-07-26 14:19:51.072458] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321040 ] 00:25:43.110 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.368 [2024-07-26 14:19:51.130915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.368 [2024-07-26 14:19:51.239433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.368 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:43.368 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:25:43.368 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:43.368 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:43.625 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:43.625 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.625 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:43.625 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.625 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:43.625 14:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:44.188 nvme0n1 00:25:44.188 14:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:44.188 14:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.188 14:19:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:44.188 14:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.188 14:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:44.188 14:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:44.188 Running I/O for 2 seconds... 00:25:44.188 [2024-07-26 14:19:52.154736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.188 [2024-07-26 14:19:52.154785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.188 [2024-07-26 14:19:52.154805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.188 [2024-07-26 14:19:52.165209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.188 [2024-07-26 14:19:52.165249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.188 [2024-07-26 14:19:52.165264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.188 [2024-07-26 14:19:52.181028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.188 [2024-07-26 14:19:52.181066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.188 [2024-07-26 14:19:52.181082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.188 [2024-07-26 14:19:52.196793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.188 [2024-07-26 14:19:52.196824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.188 [2024-07-26 14:19:52.196865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.207722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.207754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.207771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.223141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.223176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.223192] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.236621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.236652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.236670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.247703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.247738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.247755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.262080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.262109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.262125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.275994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.276023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.276040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.289219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.289251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.289269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.301393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.301424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.301442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.314022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.314054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20376 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.314072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.327356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.327387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.327404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.339917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.339963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.339981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.351045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.351075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.351090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.363075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.363105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.363122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.375965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.375993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.376009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.389809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.389840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.389858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.402039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.402067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:11712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.402083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.413505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.413553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.413572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.426702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.426733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.426751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.439215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.439244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.439281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.446 [2024-07-26 14:19:52.452770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.446 [2024-07-26 14:19:52.452814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.446 [2024-07-26 14:19:52.452831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.465619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.465650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.465667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.477009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.477040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.477057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.490279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.490308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.490325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.502434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.502464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.502496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.515361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.515390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.515407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.527944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.527972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.527988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.540193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.540221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.540237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.553455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.553492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.553511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.564667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.564696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.564713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.578724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 
00:25:44.705 [2024-07-26 14:19:52.578753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.578770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.591218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.591247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.591263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.603406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.603450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.603467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.616986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.617017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.617035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.629271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.629302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.629319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.641483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.641514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.641539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.653238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.653267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.653288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.668151] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.668182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.668200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.679762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.679793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.679830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.692394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.692424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.692457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.707198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.707229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.707247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.705 [2024-07-26 14:19:52.718216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.705 [2024-07-26 14:19:52.718260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.705 [2024-07-26 14:19:52.718277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.963 [2024-07-26 14:19:52.730980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.963 [2024-07-26 14:19:52.731011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.963 [2024-07-26 14:19:52.731029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.963 [2024-07-26 14:19:52.745199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.745227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.745244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.757243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.757271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.757288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.770492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.770538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.770559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.781263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.781291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.781307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.794319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.794349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.794364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.809582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.809617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.809635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.820871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.820914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.820930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.835626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.835657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.835675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.850850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.850889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.850905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.865431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.865460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.865476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.878525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.878577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.878593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.891307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.891353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.891370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.903831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.903875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.903891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.915754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.915794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.915828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.927912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.927941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.927958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.940414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.940442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.940458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.953972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.954000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.954016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.965998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.966026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.966042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.964 [2024-07-26 14:19:52.979148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:44.964 [2024-07-26 14:19:52.979179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.964 [2024-07-26 14:19:52.979196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:52.992909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:52.992953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:52.992976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.003630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.003661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.003678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.016993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.017037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:45.223 [2024-07-26 14:19:53.017054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.029425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.029454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.029471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.041969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.042012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.042027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.054716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.054746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.054763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.067109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.067138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.067154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.079800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.079844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.079861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.092749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.092779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.092796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.105749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.105785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:15249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.105804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.118938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.118968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.118985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.130009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.130037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.130054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.142403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.142431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.142446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.155356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.155384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.155400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.169657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.169687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.169704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.183174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.183206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.183224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.194451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.194479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.194495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.207271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.207302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.223 [2024-07-26 14:19:53.207339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.223 [2024-07-26 14:19:53.222258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.223 [2024-07-26 14:19:53.222286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.224 [2024-07-26 14:19:53.222302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.224 [2024-07-26 14:19:53.236093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.224 [2024-07-26 14:19:53.236145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.224 [2024-07-26 14:19:53.236164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.250201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.250232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.250250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.260428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.260455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.260470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.274820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.274863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.274878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.287318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 
00:25:45.481 [2024-07-26 14:19:53.287350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.287368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.301481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.301512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.301553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.313143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.313170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.313186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.326423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.326472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.326491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.338421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.338448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.338463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.351140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.351183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.351199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.363628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.363672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.363687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.375873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.375915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.375930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.388427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.388454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.388469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.400003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.400042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.400057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.412939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.412966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.412981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.426555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.426598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.426615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.440509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.440544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.440577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.452864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.452893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.481 [2024-07-26 14:19:53.452924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.481 [2024-07-26 14:19:53.465913] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.481 [2024-07-26 14:19:53.465940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.482 [2024-07-26 14:19:53.465956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.482 [2024-07-26 14:19:53.478554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.482 [2024-07-26 14:19:53.478597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.482 [2024-07-26 14:19:53.478615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.482 [2024-07-26 14:19:53.490683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.482 [2024-07-26 14:19:53.490714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.482 [2024-07-26 14:19:53.490731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.503187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.503217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.503234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.515129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.515156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.515172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.528158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.528203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.528220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.540880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.540911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.540950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.552235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.552263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.552278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.565668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.565696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.565711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.579746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.579775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.579791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.590829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.590871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.590886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.605738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.605768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.605800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.616412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.616457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.616474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.631842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.631885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.631900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.644782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.644826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.644843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.656970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.657018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.657035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.669223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.669254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.669285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.681598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.681642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.681659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.694367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.694398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.694416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.706559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.706589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.706607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.739 [2024-07-26 14:19:53.718637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.739 [2024-07-26 14:19:53.718665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.739 [2024-07-26 14:19:53.718680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.740 [2024-07-26 14:19:53.731235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.740 [2024-07-26 14:19:53.731281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.740 [2024-07-26 14:19:53.731297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.740 [2024-07-26 14:19:53.744157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.740 [2024-07-26 14:19:53.744201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.740 [2024-07-26 14:19:53.744217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.740 [2024-07-26 14:19:53.756497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.740 [2024-07-26 14:19:53.756535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.740 [2024-07-26 14:19:53.756555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.997 [2024-07-26 14:19:53.770079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.997 [2024-07-26 14:19:53.770120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.997 [2024-07-26 14:19:53.770136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.997 [2024-07-26 14:19:53.782370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.997 [2024-07-26 14:19:53.782400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.997 [2024-07-26 14:19:53.782418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.997 [2024-07-26 14:19:53.794117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.997 [2024-07-26 14:19:53.794144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.997 [2024-07-26 14:19:53.794159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.997 [2024-07-26 14:19:53.806910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.997 [2024-07-26 14:19:53.806941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:45.997 [2024-07-26 14:19:53.806958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.997 [2024-07-26 14:19:53.821028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.997 [2024-07-26 14:19:53.821056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.997 [2024-07-26 14:19:53.821071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.997 [2024-07-26 14:19:53.831950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.997 [2024-07-26 14:19:53.831992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.997 [2024-07-26 14:19:53.832008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.997 [2024-07-26 14:19:53.846363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.997 [2024-07-26 14:19:53.846392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.997 [2024-07-26 14:19:53.846409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.997 [2024-07-26 14:19:53.859591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.997 [2024-07-26 14:19:53.859621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.997 [2024-07-26 14:19:53.859639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.997 [2024-07-26 14:19:53.871101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.997 [2024-07-26 14:19:53.871134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.997 [2024-07-26 14:19:53.871150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.997 [2024-07-26 14:19:53.885666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.997 [2024-07-26 14:19:53.885697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.997 [2024-07-26 14:19:53.885715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.997 [2024-07-26 14:19:53.896763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.997 [2024-07-26 14:19:53.896794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 
lba:21108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.997 [2024-07-26 14:19:53.896811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.997 [2024-07-26 14:19:53.911299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.997 [2024-07-26 14:19:53.911327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.997 [2024-07-26 14:19:53.911342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.997 [2024-07-26 14:19:53.927431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.997 [2024-07-26 14:19:53.927459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.997 [2024-07-26 14:19:53.927476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.998 [2024-07-26 14:19:53.940160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.998 [2024-07-26 14:19:53.940188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.998 [2024-07-26 14:19:53.940203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.998 [2024-07-26 14:19:53.952985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.998 [2024-07-26 14:19:53.953015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.998 [2024-07-26 14:19:53.953032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.998 [2024-07-26 14:19:53.964625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.998 [2024-07-26 14:19:53.964663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.998 [2024-07-26 14:19:53.964680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.998 [2024-07-26 14:19:53.977675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.998 [2024-07-26 14:19:53.977704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.998 [2024-07-26 14:19:53.977736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.998 [2024-07-26 14:19:53.989732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.998 [2024-07-26 14:19:53.989778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.998 [2024-07-26 14:19:53.989795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.998 [2024-07-26 14:19:54.002382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:45.998 [2024-07-26 14:19:54.002426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.998 [2024-07-26 14:19:54.002443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.255 [2024-07-26 14:19:54.015075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:46.255 [2024-07-26 14:19:54.015105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.255 [2024-07-26 14:19:54.015124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.255 [2024-07-26 14:19:54.028107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:46.255 [2024-07-26 14:19:54.028150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.255 [2024-07-26 14:19:54.028165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.255 [2024-07-26 14:19:54.040384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:46.255 [2024-07-26 14:19:54.040414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.255 [2024-07-26 14:19:54.040432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.255 [2024-07-26 14:19:54.052822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:46.255 [2024-07-26 14:19:54.052850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.255 [2024-07-26 14:19:54.052866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.255 [2024-07-26 14:19:54.064997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:46.255 [2024-07-26 14:19:54.065025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.255 [2024-07-26 14:19:54.065041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.255 [2024-07-26 14:19:54.077650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 
00:25:46.255 [2024-07-26 14:19:54.077679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.255 [2024-07-26 14:19:54.077695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.255 [2024-07-26 14:19:54.090673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:46.255 [2024-07-26 14:19:54.090701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.255 [2024-07-26 14:19:54.090722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.255 [2024-07-26 14:19:54.102735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:46.255 [2024-07-26 14:19:54.102781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.255 [2024-07-26 14:19:54.102798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.255 [2024-07-26 14:19:54.115869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:46.255 [2024-07-26 14:19:54.115901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.255 [2024-07-26 14:19:54.115918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.255 [2024-07-26 14:19:54.128155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:46.255 [2024-07-26 14:19:54.128185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.255 [2024-07-26 14:19:54.128217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.255 [2024-07-26 14:19:54.141717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ccb0) 00:25:46.255 [2024-07-26 14:19:54.141747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.255 [2024-07-26 14:19:54.141764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.255 00:25:46.255 Latency(us) 00:25:46.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.255 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:46.255 nvme0n1 : 2.00 19908.68 77.77 0.00 0.00 6421.89 3616.62 18447.17 00:25:46.255 =================================================================================================================== 00:25:46.255 Total : 19908.68 77.77 0.00 0.00 6421.89 3616.62 18447.17 00:25:46.255 0 00:25:46.255 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
get_transient_errcount nvme0n1 00:25:46.255 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:46.255 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:46.255 | .driver_specific 00:25:46.255 | .nvme_error 00:25:46.255 | .status_code 00:25:46.255 | .command_transient_transport_error' 00:25:46.256 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:46.512 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 156 > 0 )) 00:25:46.512 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 321040 00:25:46.512 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 321040 ']' 00:25:46.513 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 321040 00:25:46.513 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:25:46.513 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:46.513 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 321040 00:25:46.513 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:46.513 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:46.513 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 321040' 00:25:46.513 killing process with pid 321040 00:25:46.513 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 321040 00:25:46.513 Received shutdown signal, test time was about 2.000000 seconds 00:25:46.513 00:25:46.513 Latency(us) 00:25:46.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.513 =================================================================================================================== 00:25:46.513 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:46.513 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 321040 00:25:46.770 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:46.770 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:46.770 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:46.770 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:46.770 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:46.770 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=321469 00:25:46.770 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:46.770 14:19:54 
00:25:46.770 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 321469 ']'
00:25:46.770 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:46.770 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:46.770 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:46.770 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:46.770 14:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:46.770 [2024-07-26 14:19:54.749206] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization...
00:25:46.770 [2024-07-26 14:19:54.749278] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321469 ]
00:25:46.770 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:46.770 Zero copy mechanism will not be used.
00:25:46.770 EAL: No free 2048 kB hugepages reported on node 1
00:25:47.027 [2024-07-26 14:19:54.807623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:47.027 [2024-07-26 14:19:54.914862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:47.027 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:47.027 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:25:47.027 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:47.027 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:47.284 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:47.284 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:47.284 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:47.284 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:47.284 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:47.284 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
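The randread/131072/16 error case now being set up repeats the same pattern at a larger block size: start bdevperf (-w randread -o 131072 -q 16) against /var/tmp/bperf.sock, enable per-bdev NVMe error counters, attach the TCP controller with data digest enabled, then corrupt crc32c results in the accel layer so received data digests stop verifying. A condensed restatement of the RPC sequence visible in this trace, using the same sockets, addresses, and paths as the run:

    # Condensed from the xtrace above; rpc.py lives under the workspace spdk/scripts directory.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors; retry indefinitely
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # --ddgst: verify crc32c on received data
    # accel_error_inject_error -o crc32c -t corrupt -i 32 (issued via rpc_cmd just below)
    # then forces 32 crc32c operations to produce wrong digests, so the affected READs
    # complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22) instead of success.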
00:25:47.849 nvme0n1
00:25:47.849 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:47.849 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:47.849 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:47.849 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:47.849 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:47.849 14:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:47.849 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:47.849 Zero copy mechanism will not be used.
00:25:47.849 Running I/O for 2 seconds...
00:25:47.849 [2024-07-26 14:19:55.685793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290)
00:25:47.849 [2024-07-26 14:19:55.685867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.849 [2024-07-26 14:19:55.685886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:47.849 [2024-07-26 14:19:55.690468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290)
00:25:47.849 [2024-07-26 14:19:55.690500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.849 [2024-07-26 14:19:55.690518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:47.849 [2024-07-26 14:19:55.694987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290)
00:25:47.849 [2024-07-26 14:19:55.695015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.849 [2024-07-26 14:19:55.695031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:47.849 [2024-07-26 14:19:55.699659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290)
00:25:47.849 [2024-07-26 14:19:55.699691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.849 [2024-07-26 14:19:55.699713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:47.849 [2024-07-26 14:19:55.704542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290)
00:25:47.849 [2024-07-26 14:19:55.704574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.849 [2024-07-26 14:19:55.704592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.849 [2024-07-26 14:19:55.710931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.849 [2024-07-26 14:19:55.710964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.849 [2024-07-26 14:19:55.710981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.849 [2024-07-26 14:19:55.716163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.849 [2024-07-26 14:19:55.716195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.849 [2024-07-26 14:19:55.716211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.849 [2024-07-26 14:19:55.721030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.849 [2024-07-26 14:19:55.721061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.849 [2024-07-26 14:19:55.721078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.725697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.725729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.725747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.730695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.730727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.730745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.733905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.733933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.733949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.740947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.740979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.740996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.745961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.745993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.746030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.751432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.751463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.751480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.757302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.757334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.757350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.764849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.764896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.764913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.771308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.771339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.771356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.776501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.776556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.776590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.781669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.781702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:47.850 [2024-07-26 14:19:55.781719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.786283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.786313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.786330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.790939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.790969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.790985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.795622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.795658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.795676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.800170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.800199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.800214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.804761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.804791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.804808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.809410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.809454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.809469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.814082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.814125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.814141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.818705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.818736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.818753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.823289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.823318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.823334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.828921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.828966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.828982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.836126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.836157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.836173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.843030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.843076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.843092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.849899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.849943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.849960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.857720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.857752] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.857770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.850 [2024-07-26 14:19:55.864070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:47.850 [2024-07-26 14:19:55.864101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.850 [2024-07-26 14:19:55.864119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.115 [2024-07-26 14:19:55.870809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.115 [2024-07-26 14:19:55.870841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.115 [2024-07-26 14:19:55.870858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.115 [2024-07-26 14:19:55.876787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.115 [2024-07-26 14:19:55.876819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.115 [2024-07-26 14:19:55.876852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.115 [2024-07-26 14:19:55.881911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.115 [2024-07-26 14:19:55.881942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.115 [2024-07-26 14:19:55.881959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.115 [2024-07-26 14:19:55.886502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.115 [2024-07-26 14:19:55.886555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.115 [2024-07-26 14:19:55.886573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.115 [2024-07-26 14:19:55.891173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.115 [2024-07-26 14:19:55.891203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.115 [2024-07-26 14:19:55.891240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.115 [2024-07-26 14:19:55.895887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.115 [2024-07-26 14:19:55.895917] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.115 [2024-07-26 14:19:55.895933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.115 [2024-07-26 14:19:55.900498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.115 [2024-07-26 14:19:55.900535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.115 [2024-07-26 14:19:55.900572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.115 [2024-07-26 14:19:55.905437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.115 [2024-07-26 14:19:55.905468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.115 [2024-07-26 14:19:55.905485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.911042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.911072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.911089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.918622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.918654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.918672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.925119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.925151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.925167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.931440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.931472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.931489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.937687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.937719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.937737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.943111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.943151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.943169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.948411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.948442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.948458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.953445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.953476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.953493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.958158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.958189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.958205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.963300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.963331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.963348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.967815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.967860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.967877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.972596] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.972628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.972644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.978194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.978224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.978242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.982747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.982778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.982795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.987254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.987284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.987301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.993146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.993176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.993198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:55.999179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:55.999209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:55.999228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:56.005637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:56.005670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:56.005690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:48.116 [2024-07-26 14:19:56.010193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:56.010223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:56.010238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:56.017807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:56.017852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:56.017869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:56.025242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:56.025273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:56.025290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:56.032743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:56.032775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:56.032796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:56.040231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:56.040267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:56.040284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:56.046030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:56.046060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:56.046076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:56.051208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:56.051240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:56.051272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:56.056698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.116 [2024-07-26 14:19:56.056729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.116 [2024-07-26 14:19:56.056747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.116 [2024-07-26 14:19:56.061872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.117 [2024-07-26 14:19:56.061904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.117 [2024-07-26 14:19:56.061921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.117 [2024-07-26 14:19:56.067147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.117 [2024-07-26 14:19:56.067178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.117 [2024-07-26 14:19:56.067195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.117 [2024-07-26 14:19:56.071644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.117 [2024-07-26 14:19:56.071675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.117 [2024-07-26 14:19:56.071692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.117 [2024-07-26 14:19:56.075985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.117 [2024-07-26 14:19:56.076030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.117 [2024-07-26 14:19:56.076046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.117 [2024-07-26 14:19:56.080366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.117 [2024-07-26 14:19:56.080397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.117 [2024-07-26 14:19:56.080414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.117 [2024-07-26 14:19:56.084920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.117 [2024-07-26 14:19:56.084951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.117 [2024-07-26 14:19:56.084968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.117 [2024-07-26 14:19:56.089696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.117 [2024-07-26 14:19:56.089727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.117 [2024-07-26 14:19:56.089743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.117 [2024-07-26 14:19:56.094805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.117 [2024-07-26 14:19:56.094836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.117 [2024-07-26 14:19:56.094853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.117 [2024-07-26 14:19:56.099599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.117 [2024-07-26 14:19:56.099630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.117 [2024-07-26 14:19:56.099647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.117 [2024-07-26 14:19:56.104785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.117 [2024-07-26 14:19:56.104816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.117 [2024-07-26 14:19:56.104833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.117 [2024-07-26 14:19:56.110658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.117 [2024-07-26 14:19:56.110691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.117 [2024-07-26 14:19:56.110708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.117 [2024-07-26 14:19:56.116716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.117 [2024-07-26 14:19:56.116748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.117 [2024-07-26 14:19:56.116766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.117 [2024-07-26 14:19:56.122814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.117 [2024-07-26 14:19:56.122846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:48.117 [2024-07-26 14:19:56.122864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.117 [2024-07-26 14:19:56.127820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.117 [2024-07-26 14:19:56.127852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.117 [2024-07-26 14:19:56.127876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.117 [2024-07-26 14:19:56.132704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.117 [2024-07-26 14:19:56.132736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.117 [2024-07-26 14:19:56.132753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.137799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.137830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.137848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.142823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.142855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.142873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.147585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.147617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.147633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.152654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.152686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.152711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.157677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.157709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.157733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.162209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.162241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.162258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.166741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.166772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.166789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.171116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.171152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.171169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.176327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.176358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.176375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.181489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.181520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.181547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.186117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.186147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.186164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.190687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.190716] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.190733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.195433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.195463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.195480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.199918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.199948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.199965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.204419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.204450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.204467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.208889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.208919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.208935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.213427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.213457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.213474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.217969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.217999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.376 [2024-07-26 14:19:56.218016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.376 [2024-07-26 14:19:56.222863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:48.376 [2024-07-26 14:19:56.222894] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.376 [2024-07-26 14:19:56.222912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:48.376 [2024-07-26 14:19:56.227480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290)
00:25:48.376 [2024-07-26 14:19:56.227510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:48.376 [2024-07-26 14:19:56.227533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line pattern (nvme_tcp.c:1459 *ERROR*: data digest error on tqpair=(0x13ae290); nvme_qpair.c: 243 *NOTICE*: READ sqid:1, cid 0-14, varying lba, len:32; nvme_qpair.c: 474 *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for roughly a hundred more READ completions between [2024-07-26 14:19:56.231980] and [2024-07-26 14:19:56.981054], Jenkins timestamps 00:25:48.376 through 00:25:49.163 ...]
00:25:49.163 [2024-07-26 14:19:56.985344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290)
00:25:49.163 [2024-07-26 14:19:56.985374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.163 [2024-07-26 14:19:56.985404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:49.163 [2024-07-26 14:19:56.989780]
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.163 [2024-07-26 14:19:56.989827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.163 [2024-07-26 14:19:56.989842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.163 [2024-07-26 14:19:56.994653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.163 [2024-07-26 14:19:56.994685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.163 [2024-07-26 14:19:56.994701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.163 [2024-07-26 14:19:56.999185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.163 [2024-07-26 14:19:56.999216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.163 [2024-07-26 14:19:56.999247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.163 [2024-07-26 14:19:57.003525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.163 [2024-07-26 14:19:57.003562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.163 [2024-07-26 14:19:57.003579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.163 [2024-07-26 14:19:57.007896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.163 [2024-07-26 14:19:57.007925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.163 [2024-07-26 14:19:57.007941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.163 [2024-07-26 14:19:57.012360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.163 [2024-07-26 14:19:57.012389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.163 [2024-07-26 14:19:57.012405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.163 [2024-07-26 14:19:57.017772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.163 [2024-07-26 14:19:57.017807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.163 [2024-07-26 14:19:57.017839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:25:49.163 [2024-07-26 14:19:57.024669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.163 [2024-07-26 14:19:57.024702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.163 [2024-07-26 14:19:57.024719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.163 [2024-07-26 14:19:57.031843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.163 [2024-07-26 14:19:57.031874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.163 [2024-07-26 14:19:57.031906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.163 [2024-07-26 14:19:57.037408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.163 [2024-07-26 14:19:57.037439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.163 [2024-07-26 14:19:57.037455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.163 [2024-07-26 14:19:57.043223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.163 [2024-07-26 14:19:57.043253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.163 [2024-07-26 14:19:57.043270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.163 [2024-07-26 14:19:57.048332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.163 [2024-07-26 14:19:57.048362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.048394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.054148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.054179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.054196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.059662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.059694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.059712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.065556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.065588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.065606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.071199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.071231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.071249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.077170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.077217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.077234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.083214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.083246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.083262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.088678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.088710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.088727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.093488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.093520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.093544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.098583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.098629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.098647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.103941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.103972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.104004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.109128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.109160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.109177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.113571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.113602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.113639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.117988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.118017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.118032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.122371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.122400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.122416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.126816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.126862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.126878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.131178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.131222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:49.164 [2024-07-26 14:19:57.131239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.135578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.135609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.135626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.139950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.139979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.139994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.144310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.144338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.144354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.148807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.148852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.148867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.153350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.153380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.153395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.158691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.158737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.158753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.163723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.163754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.163787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.164 [2024-07-26 14:19:57.168638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.164 [2024-07-26 14:19:57.168670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.164 [2024-07-26 14:19:57.168687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.165 [2024-07-26 14:19:57.173319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.165 [2024-07-26 14:19:57.173364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.165 [2024-07-26 14:19:57.173380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.165 [2024-07-26 14:19:57.177792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.165 [2024-07-26 14:19:57.177836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.165 [2024-07-26 14:19:57.177852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.422 [2024-07-26 14:19:57.182311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.422 [2024-07-26 14:19:57.182342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.422 [2024-07-26 14:19:57.182358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.422 [2024-07-26 14:19:57.186795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.422 [2024-07-26 14:19:57.186826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.422 [2024-07-26 14:19:57.186857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.422 [2024-07-26 14:19:57.191151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.422 [2024-07-26 14:19:57.191180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.422 [2024-07-26 14:19:57.191202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.422 [2024-07-26 14:19:57.195465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.422 [2024-07-26 14:19:57.195495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.422 [2024-07-26 14:19:57.195511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.422 [2024-07-26 14:19:57.200961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.422 [2024-07-26 14:19:57.200993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.422 [2024-07-26 14:19:57.201011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.422 [2024-07-26 14:19:57.207731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.422 [2024-07-26 14:19:57.207763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.422 [2024-07-26 14:19:57.207780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.422 [2024-07-26 14:19:57.214754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.422 [2024-07-26 14:19:57.214786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.422 [2024-07-26 14:19:57.214803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.422 [2024-07-26 14:19:57.220202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.422 [2024-07-26 14:19:57.220232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.422 [2024-07-26 14:19:57.220263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.422 [2024-07-26 14:19:57.225869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.422 [2024-07-26 14:19:57.225898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.422 [2024-07-26 14:19:57.225914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.422 [2024-07-26 14:19:57.230622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.422 [2024-07-26 14:19:57.230652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.422 [2024-07-26 14:19:57.230683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.422 [2024-07-26 14:19:57.235182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 
00:25:49.422 [2024-07-26 14:19:57.235228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.422 [2024-07-26 14:19:57.235244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.422 [2024-07-26 14:19:57.240386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.240438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.240455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.246102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.246148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.246165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.251732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.251765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.251782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.257821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.257868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.257885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.263123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.263168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.263184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.268336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.268368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.268385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.274999] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.275031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.275048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.282466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.282499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.282538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.288208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.288241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.288257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.293812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.293858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.293876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.298459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.298489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.298505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.302920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.302950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.302967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.307607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.307638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.307655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:25:49.423 [2024-07-26 14:19:57.311311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.311341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.311358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.314852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.314880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.314911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.319178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.319206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.319222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.323609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.323639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.323656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.328046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.328073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.328096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.332442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.332469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.332485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.336753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.336782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.336800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.341045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.341089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.341104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.345515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.345566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.345583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.350043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.350086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.350102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.354687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.354718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.354734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.359388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.359419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.359435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.364022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.364052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.364082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.423 [2024-07-26 14:19:57.368908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.423 [2024-07-26 14:19:57.368944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.423 [2024-07-26 14:19:57.368961] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.424 [2024-07-26 14:19:57.374225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.424 [2024-07-26 14:19:57.374255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.424 [2024-07-26 14:19:57.374272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.424 [2024-07-26 14:19:57.377273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.424 [2024-07-26 14:19:57.377302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.424 [2024-07-26 14:19:57.377319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.424 [2024-07-26 14:19:57.382319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.424 [2024-07-26 14:19:57.382363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.424 [2024-07-26 14:19:57.382380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.424 [2024-07-26 14:19:57.388215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.424 [2024-07-26 14:19:57.388245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.424 [2024-07-26 14:19:57.388262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.424 [2024-07-26 14:19:57.393946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.424 [2024-07-26 14:19:57.393992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.424 [2024-07-26 14:19:57.394009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.424 [2024-07-26 14:19:57.399933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.424 [2024-07-26 14:19:57.399977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.424 [2024-07-26 14:19:57.399992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.424 [2024-07-26 14:19:57.405812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.424 [2024-07-26 14:19:57.405858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.424 [2024-07-26 14:19:57.405875] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.424 [2024-07-26 14:19:57.411410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.424 [2024-07-26 14:19:57.411456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.424 [2024-07-26 14:19:57.411473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.424 [2024-07-26 14:19:57.417118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.424 [2024-07-26 14:19:57.417150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.424 [2024-07-26 14:19:57.417182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.424 [2024-07-26 14:19:57.422755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.424 [2024-07-26 14:19:57.422786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.424 [2024-07-26 14:19:57.422803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.424 [2024-07-26 14:19:57.428421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.424 [2024-07-26 14:19:57.428452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.424 [2024-07-26 14:19:57.428468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.424 [2024-07-26 14:19:57.434045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.424 [2024-07-26 14:19:57.434090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.424 [2024-07-26 14:19:57.434106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.682 [2024-07-26 14:19:57.440144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.682 [2024-07-26 14:19:57.440176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-07-26 14:19:57.440193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.682 [2024-07-26 14:19:57.446108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.682 [2024-07-26 14:19:57.446153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:49.682 [2024-07-26 14:19:57.446170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.682 [2024-07-26 14:19:57.451283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.682 [2024-07-26 14:19:57.451316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-07-26 14:19:57.451333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.682 [2024-07-26 14:19:57.457009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.682 [2024-07-26 14:19:57.457041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-07-26 14:19:57.457073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.683 [2024-07-26 14:19:57.462756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.683 [2024-07-26 14:19:57.462807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-07-26 14:19:57.462825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.683 [2024-07-26 14:19:57.468597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.683 [2024-07-26 14:19:57.468629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-07-26 14:19:57.468646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.683 [2024-07-26 14:19:57.474099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.683 [2024-07-26 14:19:57.474127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-07-26 14:19:57.474143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.683 [2024-07-26 14:19:57.479280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.683 [2024-07-26 14:19:57.479311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-07-26 14:19:57.479328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.683 [2024-07-26 14:19:57.484263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290) 00:25:49.683 [2024-07-26 14:19:57.484293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.683 [2024-07-26 14:19:57.484310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:49.683 [2024-07-26 14:19:57.488867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290)
00:25:49.683 [2024-07-26 14:19:57.488913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.683 [2024-07-26 14:19:57.488931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... ~30 further identical READ data digest error triplets (14:19:57.494724 through 14:19:57.676816, qid:1, varying cid/lba) ...]
00:25:49.684 [2024-07-26 14:19:57.683796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ae290)
00:25:49.684 [2024-07-26 14:19:57.683830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.684 [2024-07-26 14:19:57.683863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:49.684
00:25:49.684 Latency(us)
00:25:49.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:49.684 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:49.684 nvme0n1 : 2.00 5725.40 715.68 0.00 0.00 2790.43 576.47 8543.95
00:25:49.684 ===================================================================================================================
00:25:49.684 Total : 5725.40 715.68 0.00 0.00 2790.43 576.47 8543.95
00:25:49.684 0
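The randread pass above ends the way the digest test intends: every injected CRC32C failure surfaced as a retryable COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion rather than silent corruption, and the 2-second job still completed (5725.40 IOPS in the summary). The trace that follows shows how host/digest.sh turns this into a pass/fail check: it reads the controller's per-status-code error counters over bdevperf's RPC socket and asserts the transient-error count (369 in this run) is non-zero. A minimal standalone sketch of that check, assuming the same socket path and bdev name as this run:

    # Sketch: replicate digest.sh's transient-error check by hand.
    # Assumes bdevperf is serving RPCs on /var/tmp/bperf.sock and the
    # controller was created after bdev_nvme_set_options --nvme-error-stat.
    get_transient_errcount() {
        local bdev=$1
        scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # Pass only if at least one transient transport error was counted.
    (( $(get_transient_errcount nvme0n1) > 0 ))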
00:25:49.941 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:49.941 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:49.941 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:49.941 | .driver_specific
00:25:49.941 | .nvme_error
00:25:49.941 | .status_code
00:25:49.941 | .command_transient_transport_error'
00:25:49.941 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:50.199 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 369 > 0 ))
00:25:50.199 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 321469
00:25:50.199 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 321469 ']'
00:25:50.199 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 321469
00:25:50.199 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:25:50.199 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:50.199 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 321469
00:25:50.199 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:25:50.199 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:25:50.199 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 321469'
00:25:50.199 killing process with pid 321469
00:25:50.199 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 321469
00:25:50.199 Received shutdown signal, test time was about 2.000000 seconds
00:25:50.199
00:25:50.199 Latency(us)
00:25:50.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:50.199 ===================================================================================================================
00:25:50.199 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:50.199 14:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 321469
00:25:50.480 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:25:50.480 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:50.480 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:50.480 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:25:50.480 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:25:50.480 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=321879
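With the randread bdevperf instance killed and reaped, run_bperf_err brings up a second instance for the randwrite phase (4 KiB blocks, queue depth 128, 2 seconds). The launch traced next backgrounds bdevperf in its idle -z mode and then waits for the RPC socket to appear; roughly as below, where the until-loop is a simplified stand-in for the autotest waitforlisten helper:

    # Sketch: how the randwrite bdevperf instance is brought up.
    # -z starts bdevperf idle so it can be configured over RPC before any I/O.
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    # Poll until the app is listening on the UNIX domain socket.
    until scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done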
00:25:50.480 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:25:50.481 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 321879 /var/tmp/bperf.sock
00:25:50.481 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 321879 ']'
00:25:50.481 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:50.481 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:50.481 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:50.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:50.481 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:50.481 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:50.481 [2024-07-26 14:19:58.273088] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization...
00:25:50.481 [2024-07-26 14:19:58.273181] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321879 ]
00:25:50.481 EAL: No free 2048 kB hugepages reported on node 1
00:25:50.481 [2024-07-26 14:19:58.332155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:50.481 [2024-07-26 14:19:58.440144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:50.777 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:50.777 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:25:50.777 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:50.777 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:51.057 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:51.057 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:51.057 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:51.057 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:51.057 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:51.057 14:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:51.331 nvme0n1
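The RPCs just traced are the heart of the write-side setup: per-status-code error statistics plus an infinite bdev retry count (so injected failures are retried instead of failing the job), crc32c corruption switched off while the controller attaches (the connect itself must not be corrupted), and the controller attached with --ddgst so every data PDU carries a CRC32C data digest. The step traced right after this arms the actual fault by corrupting the next 256 crc32c operations. Condensed into a plain script, with the socket, target address, and NQN taken from this job's trace:

    # Sketch: the RPC sequence for the write-side digest-error pass.
    rpc="scripts/rpc.py -s /var/tmp/bperf.sock"

    # Count NVMe errors per status code; retry failed I/O forever.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Keep CRC32C honest while connecting...
    $rpc accel_error_inject_error -o crc32c -t disable

    # ...attach the target with data digest (DDGST) enabled...
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ...then corrupt the next 256 crc32c operations.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256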
00:25:51.331 14:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:25:51.332 14:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:51.332 14:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:51.332 14:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:51.332 14:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:51.332 14:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:51.332 Running I/O for 2 seconds...
00:25:51.332 [2024-07-26 14:19:59.307348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8
00:25:51.332 [2024-07-26 14:19:59.307585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.332 [2024-07-26 14:19:59.307638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0
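Every failure below follows the same three-line pattern: the transport side (tcp.c, data_crc32_calc_done) rejects a data PDU whose DDGST no longer matches its payload, the host prints the offending WRITE, and the command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22). Because --bdev-retry-count is -1, each failed write is simply retried, so the workload keeps running while the error counter climbs. Those triplets can also be tallied straight from a captured log as a cross-check against the RPC counter; the log file name below is hypothetical:

    # Sketch: count digest failures from a saved bdevperf log
    # (bperf.log stands in for a capture of the output that follows).
    grep -c 'Data digest error on tqpair' bperf.log
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log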
[... the run continues with dozens of identical WRITE data digest error triplets (14:19:59.321521 through 14:20:00.503181, qid:1, varying cid/lba) ...]
00:25:52.648 [2024-07-26 14:20:00.516544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8
00:25:52.648 [2024-07-26 14:20:00.516743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:52.648 [2024-07-26 14:20:00.516797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.648 [2024-07-26 14:20:00.530133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.648 [2024-07-26 14:20:00.530378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-07-26 14:20:00.530446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.648 [2024-07-26 14:20:00.543802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.648 [2024-07-26 14:20:00.544042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-07-26 14:20:00.544099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.648 [2024-07-26 14:20:00.557341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.648 [2024-07-26 14:20:00.557644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-07-26 14:20:00.557677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.648 [2024-07-26 14:20:00.570908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.648 [2024-07-26 14:20:00.571171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-07-26 14:20:00.571209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.648 [2024-07-26 14:20:00.584489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.648 [2024-07-26 14:20:00.584697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-07-26 14:20:00.584751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.648 [2024-07-26 14:20:00.597977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.648 [2024-07-26 14:20:00.598250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-07-26 14:20:00.598281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.648 [2024-07-26 14:20:00.611768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.648 [2024-07-26 14:20:00.611991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22388 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:52.648 [2024-07-26 14:20:00.612043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.648 [2024-07-26 14:20:00.625158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.648 [2024-07-26 14:20:00.625378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-07-26 14:20:00.625428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.648 [2024-07-26 14:20:00.638829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.649 [2024-07-26 14:20:00.639047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.649 [2024-07-26 14:20:00.639076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.649 [2024-07-26 14:20:00.652646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.649 [2024-07-26 14:20:00.652925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.649 [2024-07-26 14:20:00.652956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.666237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.666457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.666514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.679965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.680189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.680257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.693947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.694175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.694230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.707743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.708025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7443 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.708056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.721583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.721784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.721852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.735436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.735704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.735772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.749468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.749707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.749775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.763558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.763815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.763868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.777403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.777648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.777718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.791430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.791665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.791738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.805235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.805510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:67 nsid:1 lba:4173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.805560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.819116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.819380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.819449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.833093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.833343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.833409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.847005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.847252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.847326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.860949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.861177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.861240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.874841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.875173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.875204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.888513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.888788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.888864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.902622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.902851] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.902937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.907 [2024-07-26 14:20:00.916596] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:52.907 [2024-07-26 14:20:00.916850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.907 [2024-07-26 14:20:00.916904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:00.930131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:00.930333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:00.930409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:00.943909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:00.944151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:00.944217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:00.957833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:00.958066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:00.958123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:00.971836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:00.972056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:00.972122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:00.985781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:00.986026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:00.986093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:00.999780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:01.000017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:01.000080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:01.013805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:01.014034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:01.014102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:01.027758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:01.028023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:01.028087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:01.041636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:01.041855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:01.041920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:01.055541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:01.055823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:01.055893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:01.069364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:01.069682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:01.069713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:01.083232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:01.083546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:01.083577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:01.097175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 
14:20:01.097424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:01.097494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:01.111034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:01.111323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:01.111391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:01.124870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:01.125095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:01.125166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:01.138621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:01.138851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:01.138921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:01.152574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:01.152842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:01.152913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:01.166398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:01.166741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:01.166772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.166 [2024-07-26 14:20:01.180289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.166 [2024-07-26 14:20:01.180524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.166 [2024-07-26 14:20:01.180636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.425 [2024-07-26 14:20:01.193935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 
00:25:53.425 [2024-07-26 14:20:01.194138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.425 [2024-07-26 14:20:01.194214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.425 [2024-07-26 14:20:01.207808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.425 [2024-07-26 14:20:01.208100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.425 [2024-07-26 14:20:01.208144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.425 [2024-07-26 14:20:01.221804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.425 [2024-07-26 14:20:01.222047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.425 [2024-07-26 14:20:01.222119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.425 [2024-07-26 14:20:01.235712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.425 [2024-07-26 14:20:01.235995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.425 [2024-07-26 14:20:01.236062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.425 [2024-07-26 14:20:01.249684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.425 [2024-07-26 14:20:01.249933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.425 [2024-07-26 14:20:01.249991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.425 [2024-07-26 14:20:01.263703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.425 [2024-07-26 14:20:01.263984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.425 [2024-07-26 14:20:01.264013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.425 [2024-07-26 14:20:01.277657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with pdu=0x2000190de8a8 00:25:53.425 [2024-07-26 14:20:01.277909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.425 [2024-07-26 14:20:01.277939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.425 [2024-07-26 14:20:01.291626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x137f4f0) with 
pdu=0x2000190de8a8
00:25:53.425 [2024-07-26 14:20:01.291955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:53.425 [2024-07-26 14:20:01.291984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:53.425
00:25:53.425 Latency(us)
00:25:53.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:53.425 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:53.425 nvme0n1 : 2.01 18536.68 72.41 0.00 0.00 6887.67 2815.62 14466.47
00:25:53.425 ===================================================================================================================
00:25:53.425 Total : 18536.68 72.41 0.00 0.00 6887.67 2815.62 14466.47
00:25:53.425 0
00:25:53.425 14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:53.425 | .driver_specific
00:25:53.425 | .nvme_error
00:25:53.425 | .status_code
00:25:53.425 | .command_transient_transport_error'
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 ))
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 321879
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 321879 ']'
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 321879
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 321879
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 321879'
killing process with pid 321879
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 321879
Received shutdown signal, test time was about 2.000000 seconds
00:25:53.684
00:25:53.684 Latency(us)
00:25:53.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:53.684 ===================================================================================================================
00:25:53.684 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 321879
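The (( 145 > 0 )) above is the pass criterion for this qd=128 run: get_transient_errcount must read a positive command_transient_transport_error count from bdevperf's iostat, meaning every injected digest failure surfaced as a retriable transient transport error rather than a failed I/O. A minimal standalone sketch of the same check, assuming only the socket path and bdev name shown in the trace (the counter is maintained by bdev_nvme when --nvme-error-stat is set, as in the next run's setup):

  # Read the transient-transport-error counter from bdevperf's RPC socket.
  errs=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
         jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test asserts at least one counted digest error (145 in this run).
  (( errs > 0 )) || exit 1

As a sanity check on the table above, 18536.68 IOPS at the 4096-byte IO size is 18536.68 * 4096 / 2^20, about 72.4 MiB/s, matching the MiB/s column.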
00:25:53.941 14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=322292
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 322292 /var/tmp/bperf.sock
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 322292 ']'
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
14:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:53.942 [2024-07-26 14:20:01.912835] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization...
00:25:53.942 [2024-07-26 14:20:01.912943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322292 ]
00:25:53.942 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:53.942 Zero copy mechanism will not be used.
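run_bperf_err here launches a fresh bdevperf for the 128 KiB, qd=16 error-injection pass and blocks until its RPC socket is up. A rough sketch of what the traced lines amount to, using only the paths and flags shown above (waitforlisten is the autotest_common.sh helper):

  # Start bdevperf idle: -z makes it wait for a perform_tests RPC instead of
  # running immediately, and -r gives it a private RPC socket.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Poll until the UNIX domain socket accepts RPC connections.
  waitforlisten "$bperfpid" /var/tmp/bperf.sock

The 131072-byte I/O size also explains the notice above: it exceeds the 65536-byte zero-copy threshold, so payloads are copied rather than sent zero-copy.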
00:25:53.942 EAL: No free 2048 kB hugepages reported on node 1
00:25:54.199 [2024-07-26 14:20:01.974049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:54.199 [2024-07-26 14:20:02.082915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:54.199 14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:54.456 14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:55.021 nvme0n1
00:25:55.021 14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
14:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:55.279 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:55.279 Zero copy mechanism will not be used.
00:25:55.279 Running I/O for 2 seconds...
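Condensed, the setup traced above is the following sequence; every command appears verbatim in the trace (bperf_rpc and rpc_cmd are digest.sh helpers, the former targeting /var/tmp/bperf.sock, the latter the nvmf target's default RPC socket), and only the comments are added:

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count NVMe errors; retry forever instead of failing I/O
  rpc_cmd accel_error_inject_error -o crc32c -t disable                     # start with crc32c corruption switched off
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                        # attach with data digest enabled
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32               # corrupt crc32c results (-i 32 scopes the injection)
  bperf_py perform_tests                                                    # start the idle bdevperf's 2-second run

With crc32c results corrupted on the target, digest verification of the written data fails (the tcp.c:2113 errors that follow), each command completes as a transient transport error, and --bdev-retry-count -1 turns every one into a retry instead of a failure.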
00:25:55.279 [2024-07-26 14:20:03.104873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90
00:25:55.279 [2024-07-26 14:20:03.105169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.279 [2024-07-26 14:20:03.105208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[roughly forty further error/command/completion triplets, 14:20:03.110667 through 14:20:03.294829, elided: as above, every WRITE (now len:32, cid cycling 0/1/15) hits a data digest error at tcp.c:2113 on tqpair=(0x11b4af0) and completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), here one triplet every 4-5 ms]
00:25:55.539 [2024-07-26 14:20:03.299519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90
00:25:55.539 [2024-07-26 14:20:03.299671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-07-26 14:20:03.299753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.539 [2024-07-26 14:20:03.304664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.539 [2024-07-26 14:20:03.304835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-07-26 14:20:03.304864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.539 [2024-07-26 14:20:03.309646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.539 [2024-07-26 14:20:03.309950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-07-26 14:20:03.309981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.539 [2024-07-26 14:20:03.314594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.539 [2024-07-26 14:20:03.314742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-07-26 14:20:03.314771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.539 [2024-07-26 14:20:03.319510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.539 [2024-07-26 14:20:03.319652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-07-26 14:20:03.319680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.539 [2024-07-26 14:20:03.324464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.539 [2024-07-26 14:20:03.324640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-07-26 14:20:03.324670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.539 [2024-07-26 14:20:03.329441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.539 [2024-07-26 14:20:03.329601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-07-26 14:20:03.329630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.539 [2024-07-26 14:20:03.334402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.539 [2024-07-26 14:20:03.334636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-07-26 14:20:03.334668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.539 [2024-07-26 14:20:03.339247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.539 [2024-07-26 14:20:03.339360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.539 [2024-07-26 14:20:03.339445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.539 [2024-07-26 14:20:03.344110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.539 [2024-07-26 14:20:03.344222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.344304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.348970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.349165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.349213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.354030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.354122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.354150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.358892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.358966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.358993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.363943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.364130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.364159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.368918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.369087] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.369116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.373777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.373872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.373902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.378797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.378889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.378917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.383698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.383781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.383809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.389366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.389590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.389619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.395417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.395678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.395709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.400109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.400574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.400605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.404746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.405113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.405208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.409282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.409565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.409666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.414379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.414726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.414776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.419768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.419966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.420000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.425401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.425582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.425626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.431165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.431374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.431404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.437462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.437642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.437673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.443010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 
14:20:03.443479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.443510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.447547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.447997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.448066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.452113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.452472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.452504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.456676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.457013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.457091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.461141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.461431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.461522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.466571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.466736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.466766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.471446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.472028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.472096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.476085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with 
pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.476544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.540 [2024-07-26 14:20:03.476597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.540 [2024-07-26 14:20:03.480941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.540 [2024-07-26 14:20:03.481147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.481185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.486274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.486400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.486428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.491136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.491496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.491559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.495515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.495876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.495906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.499911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.500192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.500234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.504383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.504774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.504824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.508974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.509270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.509301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.513399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.513885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.513982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.517899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.518333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.518363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.522047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.522396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.522426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.526281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.526605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.526666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.530683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.531079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.531109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.535161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.535543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.535583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.539707] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.540148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.540191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.544053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.544445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.544475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.548454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.549069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.549100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.541 [2024-07-26 14:20:03.552761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.541 [2024-07-26 14:20:03.553087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.541 [2024-07-26 14:20:03.553117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.800 [2024-07-26 14:20:03.557113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.800 [2024-07-26 14:20:03.557482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.800 [2024-07-26 14:20:03.557525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.800 [2024-07-26 14:20:03.561489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.800 [2024-07-26 14:20:03.561748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.800 [2024-07-26 14:20:03.561782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.800 [2024-07-26 14:20:03.565832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.800 [2024-07-26 14:20:03.566262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.800 [2024-07-26 14:20:03.566291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:25:55.800 [2024-07-26 14:20:03.570206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.800 [2024-07-26 14:20:03.570661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.800 [2024-07-26 14:20:03.570750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.800 [2024-07-26 14:20:03.574788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.800 [2024-07-26 14:20:03.575241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.800 [2024-07-26 14:20:03.575296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.800 [2024-07-26 14:20:03.579140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.800 [2024-07-26 14:20:03.579598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.800 [2024-07-26 14:20:03.579655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.800 [2024-07-26 14:20:03.583677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.800 [2024-07-26 14:20:03.583836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.800 [2024-07-26 14:20:03.583940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.800 [2024-07-26 14:20:03.588602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.800 [2024-07-26 14:20:03.588775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.800 [2024-07-26 14:20:03.588840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.800 [2024-07-26 14:20:03.593948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.594170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.594201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.599981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.600123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.600156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.605265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.605665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.605744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.609696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.610029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.610059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.614425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.614680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.614761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.619053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.619520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.619559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.623750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.624015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.624048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.628362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.628609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.628652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.633036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.633362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.633433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.637742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.638021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.638090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.642164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.642392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.642426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.647011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.647273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.647345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.651729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.651982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.652033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.656431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.656735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.656816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.661043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.661415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.661536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.665689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.665849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.665947] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.670505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.670645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.670738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.675201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.675388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.675447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.679894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.680119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.680166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.684606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.684964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.684994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.689357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.689630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.689693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.694079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.694358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.694411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.698934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.699044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.699089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.703668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.704108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.704191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.708332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.708699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.708730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.713688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.714062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.714093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.801 [2024-07-26 14:20:03.719030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.801 [2024-07-26 14:20:03.719354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.801 [2024-07-26 14:20:03.719384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.802 [2024-07-26 14:20:03.725094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.802 [2024-07-26 14:20:03.725301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.802 [2024-07-26 14:20:03.725335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.802 [2024-07-26 14:20:03.730398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.802 [2024-07-26 14:20:03.730580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.802 [2024-07-26 14:20:03.730613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.802 [2024-07-26 14:20:03.736381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.802 [2024-07-26 14:20:03.736665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.802 [2024-07-26 
14:20:03.736697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.802 [2024-07-26 14:20:03.741822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.802 [2024-07-26 14:20:03.742051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.802 [2024-07-26 14:20:03.742083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.802 [2024-07-26 14:20:03.747707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.802 [2024-07-26 14:20:03.748026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.802 [2024-07-26 14:20:03.748056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.802 [2024-07-26 14:20:03.753127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.802 [2024-07-26 14:20:03.753416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.802 [2024-07-26 14:20:03.753462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.802 [2024-07-26 14:20:03.758676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.802 [2024-07-26 14:20:03.759039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.802 [2024-07-26 14:20:03.759070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.802 [2024-07-26 14:20:03.764182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.802 [2024-07-26 14:20:03.764442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.802 [2024-07-26 14:20:03.764488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.802 [2024-07-26 14:20:03.769808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.802 [2024-07-26 14:20:03.770091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.802 [2024-07-26 14:20:03.770123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.802 [2024-07-26 14:20:03.775343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:55.802 [2024-07-26 14:20:03.775674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:55.802 [2024-07-26 14:20:03.775705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:55.802 [2024-07-26 14:20:03.781199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90
00:25:55.802 [2024-07-26 14:20:03.781556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.802 [2024-07-26 14:20:03.781612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:55.802 [2024-07-26 14:20:03.786826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90
00:25:55.802 [2024-07-26 14:20:03.787155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.802 [2024-07-26 14:20:03.787191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:55.802 [2024-07-26 14:20:03.793014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90
00:25:55.802 [2024-07-26 14:20:03.793143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:55.802 [2024-07-26 14:20:03.793229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... dozens of further near-identical triplets (timestamps 14:20:03.798 through 14:20:04.469) omitted: each a tcp.c:2113:data_crc32_calc_done data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90, followed by the affected len:32 WRITE and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0 ...]
00:25:56.587 [2024-07-26 14:20:04.474026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90
00:25:56.587 [2024-07-26 14:20:04.474229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.587 [2024-07-26 14:20:04.474306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:56.587 [2024-07-26 14:20:04.479160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.587 [2024-07-26 14:20:04.479456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.587 [2024-07-26 14:20:04.479579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.587 [2024-07-26 14:20:04.484288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.587 [2024-07-26 14:20:04.484767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.587 [2024-07-26 14:20:04.484798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.587 [2024-07-26 14:20:04.489471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.587 [2024-07-26 14:20:04.489873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.587 [2024-07-26 14:20:04.489963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.587 [2024-07-26 14:20:04.494695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.587 [2024-07-26 14:20:04.495030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.587 [2024-07-26 14:20:04.495065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.587 [2024-07-26 14:20:04.499497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.587 [2024-07-26 14:20:04.499988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.587 [2024-07-26 14:20:04.500020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.587 [2024-07-26 14:20:04.504857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.587 [2024-07-26 14:20:04.505117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.587 [2024-07-26 14:20:04.505198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.587 [2024-07-26 14:20:04.509818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.587 [2024-07-26 14:20:04.510171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.587 [2024-07-26 14:20:04.510258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.587 [2024-07-26 14:20:04.515020] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.587 [2024-07-26 14:20:04.515343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.515389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.588 [2024-07-26 14:20:04.520235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.520494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.520612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.588 [2024-07-26 14:20:04.525417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.525660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.525693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.588 [2024-07-26 14:20:04.530654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.530985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.531021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.588 [2024-07-26 14:20:04.535721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.535975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.536061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.588 [2024-07-26 14:20:04.541022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.541253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.541284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.588 [2024-07-26 14:20:04.546261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.546713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.546749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.588 
[2024-07-26 14:20:04.551216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.551594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.551672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.588 [2024-07-26 14:20:04.556391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.556665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.556745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.588 [2024-07-26 14:20:04.561635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.562071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.562101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.588 [2024-07-26 14:20:04.566846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.567161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.567195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.588 [2024-07-26 14:20:04.572058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.572295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.572400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.588 [2024-07-26 14:20:04.577396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.577700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.577747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.588 [2024-07-26 14:20:04.582625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.582929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.582975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:25:56.588 [2024-07-26 14:20:04.587829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.588167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.588204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.588 [2024-07-26 14:20:04.593089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.593323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.593353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.588 [2024-07-26 14:20:04.598307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.588 [2024-07-26 14:20:04.598546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.588 [2024-07-26 14:20:04.598580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.847 [2024-07-26 14:20:04.603399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.603678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.603762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.608739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.608992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.609092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.613700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.614032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.614061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.618915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.619283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.619358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.623987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.624222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.624252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.629134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.629291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.629325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.634283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.634546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.634627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.639795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.639985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.640016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.645091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.645302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.645333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.650747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.650877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.650926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.657443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.657759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.657789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.663863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.664061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.664090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.670255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.670568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.670649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.676488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.676766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.676803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.683023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.683184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.683214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.688804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.688982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.689011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.694882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.695051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.695095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.700444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.700610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.700734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.705890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.706108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.706138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.711412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.711592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.711623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.716296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.716722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.716818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.720594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.720900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.720931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.724909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.725331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.725396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.729412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.729759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 [2024-07-26 14:20:04.729851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.733741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.734155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.848 
[2024-07-26 14:20:04.734217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.848 [2024-07-26 14:20:04.738074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.848 [2024-07-26 14:20:04.738458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.738489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.742388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.742682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.742748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.746723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.747035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.747119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.751125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.751379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.751469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.755426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.755844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.755945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.759925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.760278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.760324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.764454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.764960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.765014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.768856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.769268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.769298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.773229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.773507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.773579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.777737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.778029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.778097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.782308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.782723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.782796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.786861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.787085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.787114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.791426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.791812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.791844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.795970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.796292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.796366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.800907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.801050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.801091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.806178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.806381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.806410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.812083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.812295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.812325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.817582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.817683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.817711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.822168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.822414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.822488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.826969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.827220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.827250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.831916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.832109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.832138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.836538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.836958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.836987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.841052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.841454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.841523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.845610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.845885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.845939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.850229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.850378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.850430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.854921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.849 [2024-07-26 14:20:04.855214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.849 [2024-07-26 14:20:04.855246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.849 [2024-07-26 14:20:04.859650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.850 [2024-07-26 14:20:04.859803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.850 [2024-07-26 14:20:04.859862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.850 [2024-07-26 14:20:04.864189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:56.850 [2024-07-26 14:20:04.864465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-07-26 14:20:04.864523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.109 [2024-07-26 14:20:04.868879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.109 [2024-07-26 14:20:04.869085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-07-26 14:20:04.869121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.109 [2024-07-26 14:20:04.873345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.109 [2024-07-26 14:20:04.873520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-07-26 14:20:04.873583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.109 [2024-07-26 14:20:04.878064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.109 [2024-07-26 14:20:04.878245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-07-26 14:20:04.878276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.109 [2024-07-26 14:20:04.882746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.109 [2024-07-26 14:20:04.883008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-07-26 14:20:04.883054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.109 [2024-07-26 14:20:04.887335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.109 [2024-07-26 14:20:04.887787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-07-26 14:20:04.887818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.109 [2024-07-26 14:20:04.892059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.109 [2024-07-26 14:20:04.892399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-07-26 14:20:04.892429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.109 [2024-07-26 14:20:04.896727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.109 [2024-07-26 
14:20:04.897034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-07-26 14:20:04.897069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.109 [2024-07-26 14:20:04.901325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.109 [2024-07-26 14:20:04.901547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-07-26 14:20:04.901632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.109 [2024-07-26 14:20:04.905996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.109 [2024-07-26 14:20:04.906288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-07-26 14:20:04.906363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.109 [2024-07-26 14:20:04.910418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.109 [2024-07-26 14:20:04.910701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-07-26 14:20:04.910733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.109 [2024-07-26 14:20:04.915222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.109 [2024-07-26 14:20:04.915484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-07-26 14:20:04.915573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.109 [2024-07-26 14:20:04.920422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.109 [2024-07-26 14:20:04.920764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-07-26 14:20:04.920795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.109 [2024-07-26 14:20:04.925697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.109 [2024-07-26 14:20:04.926057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-07-26 14:20:04.926119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.109 [2024-07-26 14:20:04.931133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 
00:25:57.109 [2024-07-26 14:20:04.931474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.109 [2024-07-26 14:20:04.931570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.109 [2024-07-26 14:20:04.936473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.110 [2024-07-26 14:20:04.936809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.110 [2024-07-26 14:20:04.936855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.110 [2024-07-26 14:20:04.942919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.110 [2024-07-26 14:20:04.943236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.110 [2024-07-26 14:20:04.943297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.110 [2024-07-26 14:20:04.948139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.110 [2024-07-26 14:20:04.948488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.110 [2024-07-26 14:20:04.948555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.110 [2024-07-26 14:20:04.952767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.110 [2024-07-26 14:20:04.953140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.110 [2024-07-26 14:20:04.953185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.110 [2024-07-26 14:20:04.957195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.110 [2024-07-26 14:20:04.957522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.110 [2024-07-26 14:20:04.957620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.110 [2024-07-26 14:20:04.961988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90 00:25:57.110 [2024-07-26 14:20:04.962136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.110 [2024-07-26 14:20:04.962164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.110 [2024-07-26 14:20:04.967167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x11b4af0) with pdu=0x2000190fef90
00:25:57.110 [2024-07-26 14:20:04.967360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.110 [2024-07-26 14:20:04.967388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:57.110 [2024-07-26 14:20:04.971723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90
00:25:57.110 [2024-07-26 14:20:04.972096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.110 [2024-07-26 14:20:04.972190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:57.110 [2024-07-26 14:20:04.976745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90
00:25:57.110 [2024-07-26 14:20:04.977071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.110 [2024-07-26 14:20:04.977104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the identical data_crc32_calc_done *ERROR* / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats for 23 further WRITE commands, 14:20:04.982038 through 14:20:05.096134 ...]
00:25:57.111 [2024-07-26 14:20:05.100584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11b4af0) with pdu=0x2000190fef90
00:25:57.111 [2024-07-26 14:20:05.100766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.111 [2024-07-26 14:20:05.100828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.111
00:25:57.111 Latency(us)
00:25:57.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:57.111 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:25:57.111 nvme0n1 : 2.00 6222.51 777.81 0.00 0.00 2560.36 1747.63 6699.24
00:25:57.111 ===================================================================================================================
00:25:57.111 Total : 6222.51 777.81 0.00 0.00 2560.36 1747.63 6699.24
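With the randwrite job above complete, the harness verifies that the injected digest corruption was actually observed: host/digest.sh, traced next, pulls the bdev I/O statistics over the bperf RPC socket and extracts the transient-transport-error counter. A minimal standalone sketch of that check (the rpc.py path and /var/tmp/bperf.sock socket are this job's layout and would differ elsewhere):

#!/usr/bin/env bash
# Sketch of get_transient_errcount (host/digest.sh@27-28): read the per-bdev
# NVMe error statistics and pull out the transient transport error count that
# the digest-error injection should have produced.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# host/digest.sh@71 asserts the count is non-zero; this run saw 401.
(( errcount > 0 )) && echo "nvme0n1: $errcount transient transport errors"

In the trace below the assertion evaluates as (( 401 > 0 )): 401 WRITEs completed with COMMAND TRANSIENT TRANSPORT ERROR, which is exactly what the digest-error test expects.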
00:25:57.111 0 00:25:57.111 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:57.111 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:57.111 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:57.111 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:57.111 | .driver_specific 00:25:57.111 | .nvme_error 00:25:57.111 | .status_code 00:25:57.111 | .command_transient_transport_error' 00:25:57.369 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 401 > 0 )) 00:25:57.369 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 322292 00:25:57.369 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 322292 ']' 00:25:57.369 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 322292 00:25:57.369 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:25:57.369 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:57.369 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 322292 00:25:57.627 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:57.627 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:57.627 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 322292' 00:25:57.627 killing process with pid 322292 00:25:57.627 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 322292 00:25:57.627 Received shutdown signal, test time was about 2.000000 seconds 00:25:57.627 00:25:57.627 Latency(us) 00:25:57.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.627 =================================================================================================================== 00:25:57.627 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:57.627 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 322292 00:25:57.885 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 320924 00:25:57.885 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 320924 ']' 00:25:57.885 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 320924 00:25:57.885 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:25:57.885 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:57.885 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 320924 00:25:57.885 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:25:57.885 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:57.885 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 320924' 00:25:57.885 killing process with pid 320924 00:25:57.885 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 320924 00:25:57.885 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 320924 00:25:58.143 00:25:58.143 real 0m15.330s 00:25:58.143 user 0m29.235s 00:25:58.143 sys 0m4.605s 00:25:58.143 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:58.143 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:58.143 ************************************ 00:25:58.143 END TEST nvmf_digest_error 00:25:58.143 ************************************ 00:25:58.143 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:58.143 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:58.143 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:58.143 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:25:58.143 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:58.143 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:25:58.143 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:58.143 14:20:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:58.143 rmmod nvme_tcp 00:25:58.143 rmmod nvme_fabrics 00:25:58.143 rmmod nvme_keyring 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 320924 ']' 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 320924 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 320924 ']' 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 320924 00:25:58.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (320924) - No such process 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 320924 is not found' 00:25:58.143 Process with pid 320924 is not found 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.143 14:20:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:00.679 00:26:00.679 real 0m34.989s 00:26:00.679 user 0m59.804s 00:26:00.679 sys 0m10.623s 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:00.679 ************************************ 00:26:00.679 END TEST nvmf_digest 00:26:00.679 ************************************ 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.679 ************************************ 00:26:00.679 START TEST nvmf_bdevperf 00:26:00.679 ************************************ 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:00.679 * Looking for test storage... 
00:26:00.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.679 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:00.680 14:20:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:02.579 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:02.579 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:02.579 14:20:10 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:02.579 Found net devices under 0000:09:00.0: cvl_0_0 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:02.579 Found net devices under 0000:09:00.1: cvl_0_1 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:02.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:26:02.579 00:26:02.579 --- 10.0.0.2 ping statistics --- 00:26:02.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.579 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:02.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:02.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:26:02.579 00:26:02.579 --- 10.0.0.1 ping statistics --- 00:26:02.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.579 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=324663 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 324663 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 324663 ']' 
00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:02.579 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:02.579 [2024-07-26 14:20:10.409627] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:26:02.579 [2024-07-26 14:20:10.409704] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.579 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.579 [2024-07-26 14:20:10.475134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:02.579 [2024-07-26 14:20:10.587212] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.579 [2024-07-26 14:20:10.587272] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.579 [2024-07-26 14:20:10.587286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.579 [2024-07-26 14:20:10.587296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.579 [2024-07-26 14:20:10.587306] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
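Once the three reactors below are up, the script provisions the target over its RPC socket (traced at host/bdevperf.sh@17-21 further down): a TCP transport, a 64 MiB malloc bdev, and a subsystem listening on 10.0.0.2:4420. Condensed into direct rpc.py calls, a minimal equivalent sketch (rpc.py path from this job's workspace; the target's default /var/tmp/spdk.sock socket is assumed):

#!/usr/bin/env bash
# Sketch of the target-side setup that rpc_cmd performs below. Values are the
# ones from this run: MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, NQN and
# serial from bdevperf.sh, listener on the netns-side address 10.0.0.2:4420.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420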
00:26:02.579 [2024-07-26 14:20:10.587388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.579 [2024-07-26 14:20:10.587454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:02.579 [2024-07-26 14:20:10.587457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:02.837 [2024-07-26 14:20:10.734262] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:02.837 Malloc0 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:02.837 [2024-07-26 14:20:10.793622] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:02.837 { 00:26:02.837 "params": { 00:26:02.837 "name": "Nvme$subsystem", 00:26:02.837 "trtype": "$TEST_TRANSPORT", 00:26:02.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.837 "adrfam": "ipv4", 00:26:02.837 "trsvcid": "$NVMF_PORT", 00:26:02.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.837 "hdgst": ${hdgst:-false}, 00:26:02.837 "ddgst": ${ddgst:-false} 00:26:02.837 }, 00:26:02.837 "method": "bdev_nvme_attach_controller" 00:26:02.837 } 00:26:02.837 EOF 00:26:02.837 )") 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:02.837 14:20:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:02.837 "params": { 00:26:02.837 "name": "Nvme1", 00:26:02.837 "trtype": "tcp", 00:26:02.837 "traddr": "10.0.0.2", 00:26:02.837 "adrfam": "ipv4", 00:26:02.837 "trsvcid": "4420", 00:26:02.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:02.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:02.837 "hdgst": false, 00:26:02.837 "ddgst": false 00:26:02.837 }, 00:26:02.837 "method": "bdev_nvme_attach_controller" 00:26:02.837 }' 00:26:02.837 [2024-07-26 14:20:10.842391] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:26:02.837 [2024-07-26 14:20:10.842453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324789 ] 00:26:03.094 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.095 [2024-07-26 14:20:10.901617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.095 [2024-07-26 14:20:11.015985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.352 Running I/O for 1 seconds... 
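For the run whose results follow, bdevperf is not pointed at a config file on disk: gen_nvmf_target_json (traced above) prints a bdev_nvme_attach_controller entry with the parameters shown, and bdevperf reads it from /dev/fd/62. A standalone equivalent sketch; the outer "subsystems" wrapper is an assumption based on SPDK's JSON-config layout, since the trace only shows the inner object, and hdgst/ddgst default to false exactly as in the ${hdgst:-false} expansions above:

#!/usr/bin/env bash
# Sketch of host/bdevperf.sh@27: attach to the target over NVMe/TCP and run a
# 1-second verify workload, queue depth 128, 4 KiB I/O. The heredoc is wired
# to fd 62, matching the --json /dev/fd/62 trick in the trace.
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

"$bdevperf" --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 62<<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF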
00:26:04.723 00:26:04.723 Latency(us) 00:26:04.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.723 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:04.723 Verification LBA range: start 0x0 length 0x4000 00:26:04.723 Nvme1n1 : 1.01 8403.99 32.83 0.00 0.00 15166.93 3228.25 18155.90 00:26:04.723 =================================================================================================================== 00:26:04.723 Total : 8403.99 32.83 0.00 0.00 15166.93 3228.25 18155.90 00:26:04.723 14:20:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=324939 00:26:04.723 14:20:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:04.723 14:20:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:04.723 14:20:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:04.723 14:20:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:04.723 14:20:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:04.723 14:20:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:04.723 14:20:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:04.723 { 00:26:04.723 "params": { 00:26:04.723 "name": "Nvme$subsystem", 00:26:04.723 "trtype": "$TEST_TRANSPORT", 00:26:04.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.723 "adrfam": "ipv4", 00:26:04.723 "trsvcid": "$NVMF_PORT", 00:26:04.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.723 "hdgst": ${hdgst:-false}, 00:26:04.723 "ddgst": ${ddgst:-false} 00:26:04.723 }, 00:26:04.723 "method": "bdev_nvme_attach_controller" 00:26:04.723 } 00:26:04.723 EOF 00:26:04.723 )") 00:26:04.723 14:20:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:04.723 14:20:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:04.723 14:20:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:04.723 14:20:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:04.723 "params": { 00:26:04.723 "name": "Nvme1", 00:26:04.723 "trtype": "tcp", 00:26:04.723 "traddr": "10.0.0.2", 00:26:04.723 "adrfam": "ipv4", 00:26:04.723 "trsvcid": "4420", 00:26:04.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:04.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:04.723 "hdgst": false, 00:26:04.723 "ddgst": false 00:26:04.723 }, 00:26:04.723 "method": "bdev_nvme_attach_controller" 00:26:04.723 }' 00:26:04.723 [2024-07-26 14:20:12.642711] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:26:04.723 [2024-07-26 14:20:12.642790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324939 ] 00:26:04.723 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.723 [2024-07-26 14:20:12.705672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.981 [2024-07-26 14:20:12.816229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.239 Running I/O for 15 seconds... 
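The 15-second run just started differs from the first in the -f flag and the longer duration; while it is in flight, the harness hard-kills the target (nvmfpid 324663 from nvmfappstart above) so that every queued command comes back aborted, which is precisely the ABORTED - SQ DELETION storm below. Reduced to a sketch, the failure-injection step traced next is just:

#!/usr/bin/env bash
# Sketch of host/bdevperf.sh@33-35: kill the nvmf target hard while
# bdevperf's verify job is still running, then give the host side a moment
# to drain its queues. 324663 is this run's nvmf_tgt PID.
kill -9 324663
sleep 3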
00:26:07.769 14:20:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 324663
00:26:07.769 14:20:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:26:07.769 [2024-07-26 14:20:15.610244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.769 [2024-07-26 14:20:15.610302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:07.769 [2024-07-26 14:20:15.610349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.769 [2024-07-26 14:20:15.610366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION pair repeats for the rest of the queued I/O, lba 48632 through 48832, 14:20:15.610382 through 14:20:15.611208 ...]
00:26:07.769 [2024-07-26 14:20:15.611222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.769 [2024-07-26 14:20:15.611249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:07.769 [2024-07-26 14:20:15.611266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:07.769 [2024-07-26 14:20:15.611279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 
14:20:15.611903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.611978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.611989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.612003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.612018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.612031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.612043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.612057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.612068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.612082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.612093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.612107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.612118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.612132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.612143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.612156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.612169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.612182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.770 [2024-07-26 14:20:15.612193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.770 [2024-07-26 14:20:15.612207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:44 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:48256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.612556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.612586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.612614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.612643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.612672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.612713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48304 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.612743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.771 [2024-07-26 14:20:15.612983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.612997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.613009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.613021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 
[2024-07-26 14:20:15.613033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.613046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.613058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.613071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.613086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.613100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:48344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.613112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.613125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.613137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.613154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.613166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.613179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.613191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.613206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.613217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.613231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.613242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.613256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.613267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.613281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.613292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.613305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.771 [2024-07-26 14:20:15.613317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.771 [2024-07-26 14:20:15.613330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.613982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.613995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.614007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.614020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.614032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.614045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.772 [2024-07-26 14:20:15.614057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.614079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb98c0 is same with the state(5) to be set 00:26:07.772 [2024-07-26 14:20:15.614096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:07.772 [2024-07-26 14:20:15.614106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:07.772 [2024-07-26 14:20:15.614116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48616 len:8 PRP1 0x0 PRP2 0x0 00:26:07.772 [2024-07-26 14:20:15.614127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.772 [2024-07-26 14:20:15.614184] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcb98c0 was disconnected and freed. reset controller. 
00:26:07.772 [2024-07-26 14:20:15.614261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:07.772 [2024-07-26 14:20:15.614287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:07.772 [2024-07-26 14:20:15.614301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:07.772 [2024-07-26 14:20:15.614328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:07.772 [2024-07-26 14:20:15.614342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:07.772 [2024-07-26 14:20:15.614355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:07.772 [2024-07-26 14:20:15.614369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:07.772 [2024-07-26 14:20:15.614382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:07.772 [2024-07-26 14:20:15.614394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:07.772 [2024-07-26 14:20:15.617455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:07.772 [2024-07-26 14:20:15.617489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:07.772 [2024-07-26 14:20:15.618336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.772 [2024-07-26 14:20:15.618366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:07.772 [2024-07-26 14:20:15.618383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:07.772 [2024-07-26 14:20:15.618622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:07.772 [2024-07-26 14:20:15.618853] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:07.772 [2024-07-26 14:20:15.618872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:07.772 [2024-07-26 14:20:15.618902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:07.772 [2024-07-26 14:20:15.621935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:07.772 [2024-07-26 14:20:15.631031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:07.772 [2024-07-26 14:20:15.631502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.772 [2024-07-26 14:20:15.631562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:07.772 [2024-07-26 14:20:15.631578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:07.772 [2024-07-26 14:20:15.631798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:07.772 [2024-07-26 14:20:15.632019] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:07.772 [2024-07-26 14:20:15.632038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:07.772 [2024-07-26 14:20:15.632051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:07.773 [2024-07-26 14:20:15.635056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:07.773 [2024-07-26 14:20:15.644138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:07.773 [2024-07-26 14:20:15.644617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.773 [2024-07-26 14:20:15.644651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:07.773 [2024-07-26 14:20:15.644668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:07.773 [2024-07-26 14:20:15.644919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:07.773 [2024-07-26 14:20:15.645124] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:07.773 [2024-07-26 14:20:15.645143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:07.773 [2024-07-26 14:20:15.645155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:07.773 [2024-07-26 14:20:15.648089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:07.773 [2024-07-26 14:20:15.657226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:07.773 [2024-07-26 14:20:15.657541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.773 [2024-07-26 14:20:15.657568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:07.773 [2024-07-26 14:20:15.657584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:07.773 [2024-07-26 14:20:15.657837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:07.773 [2024-07-26 14:20:15.658042] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:07.773 [2024-07-26 14:20:15.658061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:07.773 [2024-07-26 14:20:15.658073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:07.773 [2024-07-26 14:20:15.661166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:07.773 [2024-07-26 14:20:15.670630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:07.773 [2024-07-26 14:20:15.671029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.773 [2024-07-26 14:20:15.671057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:07.773 [2024-07-26 14:20:15.671073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:07.773 [2024-07-26 14:20:15.671300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:07.773 [2024-07-26 14:20:15.671522] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:07.773 [2024-07-26 14:20:15.671560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:07.773 [2024-07-26 14:20:15.671577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:07.773 [2024-07-26 14:20:15.674769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:07.773 [2024-07-26 14:20:15.683936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:07.773 [2024-07-26 14:20:15.684356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.773 [2024-07-26 14:20:15.684386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:07.773 [2024-07-26 14:20:15.684403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:07.773 [2024-07-26 14:20:15.684630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:07.773 [2024-07-26 14:20:15.684865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:07.773 [2024-07-26 14:20:15.684886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:07.773 [2024-07-26 14:20:15.684899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:07.773 [2024-07-26 14:20:15.687950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:07.773 [2024-07-26 14:20:15.697122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:07.773 [2024-07-26 14:20:15.697470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.773 [2024-07-26 14:20:15.697498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:07.773 [2024-07-26 14:20:15.697535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:07.773 [2024-07-26 14:20:15.697781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:07.773 [2024-07-26 14:20:15.698003] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:07.773 [2024-07-26 14:20:15.698023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:07.773 [2024-07-26 14:20:15.698035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:07.773 [2024-07-26 14:20:15.701002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:07.773 [2024-07-26 14:20:15.710250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:07.773 [2024-07-26 14:20:15.710597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.773 [2024-07-26 14:20:15.710625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:07.773 [2024-07-26 14:20:15.710641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:07.773 [2024-07-26 14:20:15.710878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:07.773 [2024-07-26 14:20:15.711082] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:07.773 [2024-07-26 14:20:15.711102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:07.773 [2024-07-26 14:20:15.711114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:07.773 [2024-07-26 14:20:15.714175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:07.773 [2024-07-26 14:20:15.723841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:07.773 [2024-07-26 14:20:15.724201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.773 [2024-07-26 14:20:15.724228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:07.773 [2024-07-26 14:20:15.724244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:07.773 [2024-07-26 14:20:15.724467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:07.773 [2024-07-26 14:20:15.724719] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:07.773 [2024-07-26 14:20:15.724742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:07.773 [2024-07-26 14:20:15.724758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:07.773 [2024-07-26 14:20:15.727840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:07.773 [2024-07-26 14:20:15.737309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:07.773 [2024-07-26 14:20:15.737619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.773 [2024-07-26 14:20:15.737648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:07.773 [2024-07-26 14:20:15.737664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:07.773 [2024-07-26 14:20:15.737898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:07.773 [2024-07-26 14:20:15.738109] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:07.773 [2024-07-26 14:20:15.738129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:07.773 [2024-07-26 14:20:15.738142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:07.774 [2024-07-26 14:20:15.741271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:07.774 [2024-07-26 14:20:15.750789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:07.774 [2024-07-26 14:20:15.751203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.774 [2024-07-26 14:20:15.751231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:07.774 [2024-07-26 14:20:15.751247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:07.774 [2024-07-26 14:20:15.751470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:07.774 [2024-07-26 14:20:15.751723] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:07.774 [2024-07-26 14:20:15.751745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:07.774 [2024-07-26 14:20:15.751759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:07.774 [2024-07-26 14:20:15.754874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:07.774 [2024-07-26 14:20:15.764069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:07.774 [2024-07-26 14:20:15.764381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.774 [2024-07-26 14:20:15.764408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:07.774 [2024-07-26 14:20:15.764424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:07.774 [2024-07-26 14:20:15.764674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:07.774 [2024-07-26 14:20:15.764918] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:07.774 [2024-07-26 14:20:15.764937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:07.774 [2024-07-26 14:20:15.764950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:07.774 [2024-07-26 14:20:15.767904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:07.774 [2024-07-26 14:20:15.777291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:07.774 [2024-07-26 14:20:15.777643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.774 [2024-07-26 14:20:15.777672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:07.774 [2024-07-26 14:20:15.777693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:07.774 [2024-07-26 14:20:15.777934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:07.774 [2024-07-26 14:20:15.778137] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:07.774 [2024-07-26 14:20:15.778156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:07.774 [2024-07-26 14:20:15.778169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:07.774 [2024-07-26 14:20:15.781313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.033 [2024-07-26 14:20:15.790484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.033 [2024-07-26 14:20:15.790848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.033 [2024-07-26 14:20:15.790875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.033 [2024-07-26 14:20:15.790890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.033 [2024-07-26 14:20:15.791105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.033 [2024-07-26 14:20:15.791309] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.033 [2024-07-26 14:20:15.791328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.033 [2024-07-26 14:20:15.791340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.033 [2024-07-26 14:20:15.794585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.033 [2024-07-26 14:20:15.803909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.033 [2024-07-26 14:20:15.804282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.033 [2024-07-26 14:20:15.804309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.033 [2024-07-26 14:20:15.804324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.033 [2024-07-26 14:20:15.804549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.033 [2024-07-26 14:20:15.804749] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.033 [2024-07-26 14:20:15.804769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.033 [2024-07-26 14:20:15.804782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.033 [2024-07-26 14:20:15.807764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.033 [2024-07-26 14:20:15.817171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.033 [2024-07-26 14:20:15.817576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.034 [2024-07-26 14:20:15.817605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.034 [2024-07-26 14:20:15.817620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.034 [2024-07-26 14:20:15.817856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.034 [2024-07-26 14:20:15.818060] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.034 [2024-07-26 14:20:15.818084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.034 [2024-07-26 14:20:15.818097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.034 [2024-07-26 14:20:15.820979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.034 [2024-07-26 14:20:15.830145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.034 [2024-07-26 14:20:15.830554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.034 [2024-07-26 14:20:15.830597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.034 [2024-07-26 14:20:15.830613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.034 [2024-07-26 14:20:15.830854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.034 [2024-07-26 14:20:15.831058] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.034 [2024-07-26 14:20:15.831077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.034 [2024-07-26 14:20:15.831090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.034 [2024-07-26 14:20:15.834073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.034 [2024-07-26 14:20:15.843396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.034 [2024-07-26 14:20:15.843828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.034 [2024-07-26 14:20:15.843856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.034 [2024-07-26 14:20:15.843872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.034 [2024-07-26 14:20:15.844109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.034 [2024-07-26 14:20:15.844312] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.034 [2024-07-26 14:20:15.844332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.034 [2024-07-26 14:20:15.844345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.034 [2024-07-26 14:20:15.847258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.034 [2024-07-26 14:20:15.856565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.034 [2024-07-26 14:20:15.856881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.034 [2024-07-26 14:20:15.856907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.034 [2024-07-26 14:20:15.856922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.034 [2024-07-26 14:20:15.857134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.034 [2024-07-26 14:20:15.857337] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.034 [2024-07-26 14:20:15.857356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.034 [2024-07-26 14:20:15.857368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.034 [2024-07-26 14:20:15.860268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.034 [2024-07-26 14:20:15.869634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.034 [2024-07-26 14:20:15.870028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.034 [2024-07-26 14:20:15.870057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.034 [2024-07-26 14:20:15.870073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.034 [2024-07-26 14:20:15.870313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.034 [2024-07-26 14:20:15.870580] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.034 [2024-07-26 14:20:15.870603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.034 [2024-07-26 14:20:15.870617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.034 [2024-07-26 14:20:15.874092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.034 [2024-07-26 14:20:15.883424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.034 [2024-07-26 14:20:15.883788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.034 [2024-07-26 14:20:15.883817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.034 [2024-07-26 14:20:15.883834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.034 [2024-07-26 14:20:15.884048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.034 [2024-07-26 14:20:15.884288] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.034 [2024-07-26 14:20:15.884309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.034 [2024-07-26 14:20:15.884323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.034 [2024-07-26 14:20:15.887367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.034 [2024-07-26 14:20:15.896591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.034 [2024-07-26 14:20:15.896981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.034 [2024-07-26 14:20:15.897008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.034 [2024-07-26 14:20:15.897024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.034 [2024-07-26 14:20:15.897259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.034 [2024-07-26 14:20:15.897463] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.034 [2024-07-26 14:20:15.897482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.034 [2024-07-26 14:20:15.897495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.034 [2024-07-26 14:20:15.900484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.034 [2024-07-26 14:20:15.909790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.034 [2024-07-26 14:20:15.910147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.034 [2024-07-26 14:20:15.910174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.034 [2024-07-26 14:20:15.910190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.034 [2024-07-26 14:20:15.910425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.034 [2024-07-26 14:20:15.910672] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.034 [2024-07-26 14:20:15.910694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.034 [2024-07-26 14:20:15.910707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.034 [2024-07-26 14:20:15.913583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.034 [2024-07-26 14:20:15.922913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.034 [2024-07-26 14:20:15.923322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.034 [2024-07-26 14:20:15.923350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.034 [2024-07-26 14:20:15.923365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.034 [2024-07-26 14:20:15.923612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.034 [2024-07-26 14:20:15.923829] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.034 [2024-07-26 14:20:15.923849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.034 [2024-07-26 14:20:15.923861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.034 [2024-07-26 14:20:15.926632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.034 [2024-07-26 14:20:15.935930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.034 [2024-07-26 14:20:15.936272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.034 [2024-07-26 14:20:15.936300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.034 [2024-07-26 14:20:15.936315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.034 [2024-07-26 14:20:15.936560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.034 [2024-07-26 14:20:15.936755] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.034 [2024-07-26 14:20:15.936775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.034 [2024-07-26 14:20:15.936787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.034 [2024-07-26 14:20:15.939560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.034 [2024-07-26 14:20:15.948913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.034 [2024-07-26 14:20:15.949318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.035 [2024-07-26 14:20:15.949346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.035 [2024-07-26 14:20:15.949361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.035 [2024-07-26 14:20:15.949609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.035 [2024-07-26 14:20:15.949833] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.035 [2024-07-26 14:20:15.949853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.035 [2024-07-26 14:20:15.949870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.035 [2024-07-26 14:20:15.952628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.035 [2024-07-26 14:20:15.961920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.035 [2024-07-26 14:20:15.962211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.035 [2024-07-26 14:20:15.962253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.035 [2024-07-26 14:20:15.962268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.035 [2024-07-26 14:20:15.962485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.035 [2024-07-26 14:20:15.962721] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.035 [2024-07-26 14:20:15.962742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.035 [2024-07-26 14:20:15.962755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.035 [2024-07-26 14:20:15.965627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.035 [2024-07-26 14:20:15.975033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.035 [2024-07-26 14:20:15.975373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.035 [2024-07-26 14:20:15.975400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.035 [2024-07-26 14:20:15.975414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.035 [2024-07-26 14:20:15.975660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.035 [2024-07-26 14:20:15.975888] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.035 [2024-07-26 14:20:15.975908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.035 [2024-07-26 14:20:15.975920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.035 [2024-07-26 14:20:15.978779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.035 [2024-07-26 14:20:15.988001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.035 [2024-07-26 14:20:15.988354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.035 [2024-07-26 14:20:15.988395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.035 [2024-07-26 14:20:15.988410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.035 [2024-07-26 14:20:15.988656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.035 [2024-07-26 14:20:15.988899] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.035 [2024-07-26 14:20:15.988918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.035 [2024-07-26 14:20:15.988931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.035 [2024-07-26 14:20:15.991787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.035 [2024-07-26 14:20:16.001119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.035 [2024-07-26 14:20:16.001520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.035 [2024-07-26 14:20:16.001571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.035 [2024-07-26 14:20:16.001588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.035 [2024-07-26 14:20:16.001824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.035 [2024-07-26 14:20:16.002028] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.035 [2024-07-26 14:20:16.002047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.035 [2024-07-26 14:20:16.002060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.035 [2024-07-26 14:20:16.004969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.035 [2024-07-26 14:20:16.014317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.035 [2024-07-26 14:20:16.014718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.035 [2024-07-26 14:20:16.014746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.035 [2024-07-26 14:20:16.014761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.035 [2024-07-26 14:20:16.014994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.035 [2024-07-26 14:20:16.015198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.035 [2024-07-26 14:20:16.015218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.035 [2024-07-26 14:20:16.015230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.035 [2024-07-26 14:20:16.018193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.035 [2024-07-26 14:20:16.027583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.035 [2024-07-26 14:20:16.027998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.035 [2024-07-26 14:20:16.028026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.035 [2024-07-26 14:20:16.028043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.035 [2024-07-26 14:20:16.028279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.035 [2024-07-26 14:20:16.028483] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.035 [2024-07-26 14:20:16.028503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.035 [2024-07-26 14:20:16.028538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.035 [2024-07-26 14:20:16.031503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.035 [2024-07-26 14:20:16.041360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.035 [2024-07-26 14:20:16.041687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.035 [2024-07-26 14:20:16.041717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.035 [2024-07-26 14:20:16.041733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.035 [2024-07-26 14:20:16.041975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.035 [2024-07-26 14:20:16.042178] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.035 [2024-07-26 14:20:16.042199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.035 [2024-07-26 14:20:16.042212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.035 [2024-07-26 14:20:16.045472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.295 [2024-07-26 14:20:16.054810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.295 [2024-07-26 14:20:16.055278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.295 [2024-07-26 14:20:16.055308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.295 [2024-07-26 14:20:16.055324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.295 [2024-07-26 14:20:16.055549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.295 [2024-07-26 14:20:16.055769] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.295 [2024-07-26 14:20:16.055793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.295 [2024-07-26 14:20:16.055807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.295 [2024-07-26 14:20:16.059085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.295 [2024-07-26 14:20:16.068383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.295 [2024-07-26 14:20:16.068701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.295 [2024-07-26 14:20:16.068730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.295 [2024-07-26 14:20:16.068746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.295 [2024-07-26 14:20:16.068988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.295 [2024-07-26 14:20:16.069198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.295 [2024-07-26 14:20:16.069218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.295 [2024-07-26 14:20:16.069231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.295 [2024-07-26 14:20:16.072452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.295 [2024-07-26 14:20:16.081723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.295 [2024-07-26 14:20:16.082124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.295 [2024-07-26 14:20:16.082161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.295 [2024-07-26 14:20:16.082195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.295 [2024-07-26 14:20:16.082409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.295 [2024-07-26 14:20:16.082656] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.295 [2024-07-26 14:20:16.082678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.295 [2024-07-26 14:20:16.082694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.295 [2024-07-26 14:20:16.085713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.295 [2024-07-26 14:20:16.095009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.295 [2024-07-26 14:20:16.095419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.295 [2024-07-26 14:20:16.095471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.295 [2024-07-26 14:20:16.095487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.295 [2024-07-26 14:20:16.095724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.295 [2024-07-26 14:20:16.095942] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.295 [2024-07-26 14:20:16.095962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.295 [2024-07-26 14:20:16.095974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.295 [2024-07-26 14:20:16.099005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.295 [2024-07-26 14:20:16.108339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.295 [2024-07-26 14:20:16.108739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.295 [2024-07-26 14:20:16.108769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.295 [2024-07-26 14:20:16.108794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.295 [2024-07-26 14:20:16.109047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.295 [2024-07-26 14:20:16.109235] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.295 [2024-07-26 14:20:16.109254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.295 [2024-07-26 14:20:16.109266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.295 [2024-07-26 14:20:16.112298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.295 [2024-07-26 14:20:16.121452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.295 [2024-07-26 14:20:16.121858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.295 [2024-07-26 14:20:16.121887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.295 [2024-07-26 14:20:16.121903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.295 [2024-07-26 14:20:16.122171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.295 [2024-07-26 14:20:16.122384] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.295 [2024-07-26 14:20:16.122418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.295 [2024-07-26 14:20:16.122432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.295 [2024-07-26 14:20:16.125945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.295 [2024-07-26 14:20:16.134852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.295 [2024-07-26 14:20:16.135237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.295 [2024-07-26 14:20:16.135264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.295 [2024-07-26 14:20:16.135284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.295 [2024-07-26 14:20:16.135500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.295 [2024-07-26 14:20:16.135726] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.295 [2024-07-26 14:20:16.135748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.295 [2024-07-26 14:20:16.135762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.295 [2024-07-26 14:20:16.138746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.295 [2024-07-26 14:20:16.148206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.295 [2024-07-26 14:20:16.148678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.295 [2024-07-26 14:20:16.148707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.295 [2024-07-26 14:20:16.148724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.295 [2024-07-26 14:20:16.148959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.295 [2024-07-26 14:20:16.149163] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.295 [2024-07-26 14:20:16.149182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.295 [2024-07-26 14:20:16.149195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.295 [2024-07-26 14:20:16.152095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.295 [2024-07-26 14:20:16.161432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.295 [2024-07-26 14:20:16.161847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.295 [2024-07-26 14:20:16.161876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.295 [2024-07-26 14:20:16.161908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.295 [2024-07-26 14:20:16.162123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.295 [2024-07-26 14:20:16.162342] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.295 [2024-07-26 14:20:16.162362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.295 [2024-07-26 14:20:16.162375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.295 [2024-07-26 14:20:16.165292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.295 [2024-07-26 14:20:16.174629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.295 [2024-07-26 14:20:16.175022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.295 [2024-07-26 14:20:16.175075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.295 [2024-07-26 14:20:16.175110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.295 [2024-07-26 14:20:16.175357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.296 [2024-07-26 14:20:16.175571] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.296 [2024-07-26 14:20:16.175610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.296 [2024-07-26 14:20:16.175624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.296 [2024-07-26 14:20:16.178664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.296 [2024-07-26 14:20:16.187957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.296 [2024-07-26 14:20:16.188272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.296 [2024-07-26 14:20:16.188301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.296 [2024-07-26 14:20:16.188317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.296 [2024-07-26 14:20:16.188543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.296 [2024-07-26 14:20:16.188743] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.296 [2024-07-26 14:20:16.188765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.296 [2024-07-26 14:20:16.188779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.296 [2024-07-26 14:20:16.191763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.296 [2024-07-26 14:20:16.201078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.296 [2024-07-26 14:20:16.201451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.296 [2024-07-26 14:20:16.201478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.296 [2024-07-26 14:20:16.201493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.296 [2024-07-26 14:20:16.201773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.296 [2024-07-26 14:20:16.201993] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.296 [2024-07-26 14:20:16.202014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.296 [2024-07-26 14:20:16.202026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.296 [2024-07-26 14:20:16.204900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.296 [2024-07-26 14:20:16.214108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.296 [2024-07-26 14:20:16.214468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.296 [2024-07-26 14:20:16.214516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.296 [2024-07-26 14:20:16.214542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.296 [2024-07-26 14:20:16.214811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.296 [2024-07-26 14:20:16.215016] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.296 [2024-07-26 14:20:16.215037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.296 [2024-07-26 14:20:16.215049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.296 [2024-07-26 14:20:16.217962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.296 [2024-07-26 14:20:16.227198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.296 [2024-07-26 14:20:16.227578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.296 [2024-07-26 14:20:16.227606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.296 [2024-07-26 14:20:16.227621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.296 [2024-07-26 14:20:16.227839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.296 [2024-07-26 14:20:16.228044] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.296 [2024-07-26 14:20:16.228064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.296 [2024-07-26 14:20:16.228076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.296 [2024-07-26 14:20:16.230979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.296 [2024-07-26 14:20:16.240261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.296 [2024-07-26 14:20:16.240574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.296 [2024-07-26 14:20:16.240603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.296 [2024-07-26 14:20:16.240619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.296 [2024-07-26 14:20:16.240839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.296 [2024-07-26 14:20:16.241043] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.296 [2024-07-26 14:20:16.241064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.296 [2024-07-26 14:20:16.241076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.296 [2024-07-26 14:20:16.243998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.296 [2024-07-26 14:20:16.253448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.296 [2024-07-26 14:20:16.253837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.296 [2024-07-26 14:20:16.253866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.296 [2024-07-26 14:20:16.253882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.296 [2024-07-26 14:20:16.254099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.296 [2024-07-26 14:20:16.254303] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.296 [2024-07-26 14:20:16.254323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.296 [2024-07-26 14:20:16.254336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.296 [2024-07-26 14:20:16.257237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.296 [2024-07-26 14:20:16.266485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.296 [2024-07-26 14:20:16.266835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.296 [2024-07-26 14:20:16.266864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.296 [2024-07-26 14:20:16.266880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.296 [2024-07-26 14:20:16.267119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.296 [2024-07-26 14:20:16.267322] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.296 [2024-07-26 14:20:16.267343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.296 [2024-07-26 14:20:16.267355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.296 [2024-07-26 14:20:16.270258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.296 [2024-07-26 14:20:16.279551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.296 [2024-07-26 14:20:16.279894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.296 [2024-07-26 14:20:16.279922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.296 [2024-07-26 14:20:16.279937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.296 [2024-07-26 14:20:16.280152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.296 [2024-07-26 14:20:16.280354] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.296 [2024-07-26 14:20:16.280373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.296 [2024-07-26 14:20:16.280385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.296 [2024-07-26 14:20:16.283311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.296 [2024-07-26 14:20:16.292566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.296 [2024-07-26 14:20:16.292911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.296 [2024-07-26 14:20:16.292939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.296 [2024-07-26 14:20:16.292954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.296 [2024-07-26 14:20:16.293186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.296 [2024-07-26 14:20:16.293375] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.296 [2024-07-26 14:20:16.293394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.296 [2024-07-26 14:20:16.293406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.296 [2024-07-26 14:20:16.296307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.296 [2024-07-26 14:20:16.305684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.296 [2024-07-26 14:20:16.306029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.296 [2024-07-26 14:20:16.306057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.296 [2024-07-26 14:20:16.306073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.296 [2024-07-26 14:20:16.306287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.296 [2024-07-26 14:20:16.306489] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.297 [2024-07-26 14:20:16.306509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.297 [2024-07-26 14:20:16.306526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.297 [2024-07-26 14:20:16.309755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.556 [2024-07-26 14:20:16.319135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.556 [2024-07-26 14:20:16.319478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.556 [2024-07-26 14:20:16.319508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.556 [2024-07-26 14:20:16.319523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.556 [2024-07-26 14:20:16.319792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.556 [2024-07-26 14:20:16.320011] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.556 [2024-07-26 14:20:16.320032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.556 [2024-07-26 14:20:16.320044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.556 [2024-07-26 14:20:16.322955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.556 [2024-07-26 14:20:16.332245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.556 [2024-07-26 14:20:16.332589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.556 [2024-07-26 14:20:16.332618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.556 [2024-07-26 14:20:16.332633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.556 [2024-07-26 14:20:16.332863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.557 [2024-07-26 14:20:16.333066] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.557 [2024-07-26 14:20:16.333087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.557 [2024-07-26 14:20:16.333099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.557 [2024-07-26 14:20:16.335982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.557 [2024-07-26 14:20:16.345387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.557 [2024-07-26 14:20:16.345709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.557 [2024-07-26 14:20:16.345737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.557 [2024-07-26 14:20:16.345753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.557 [2024-07-26 14:20:16.345970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.557 [2024-07-26 14:20:16.346174] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.557 [2024-07-26 14:20:16.346193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.557 [2024-07-26 14:20:16.346207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.557 [2024-07-26 14:20:16.349132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.557 [2024-07-26 14:20:16.358551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.557 [2024-07-26 14:20:16.358910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.557 [2024-07-26 14:20:16.358944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:08.557 [2024-07-26 14:20:16.358961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:08.557 [2024-07-26 14:20:16.359196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:08.557 [2024-07-26 14:20:16.359400] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.557 [2024-07-26 14:20:16.359431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.557 [2024-07-26 14:20:16.359444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.557 [2024-07-26 14:20:16.362365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.557 [2024-07-26 14:20:16.371629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.557 [2024-07-26 14:20:16.371949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.557 [2024-07-26 14:20:16.371992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.557 [2024-07-26 14:20:16.372009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.557 [2024-07-26 14:20:16.372232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.557 [2024-07-26 14:20:16.372447] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.557 [2024-07-26 14:20:16.372483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.557 [2024-07-26 14:20:16.372498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.557 [2024-07-26 14:20:16.376080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.557 [2024-07-26 14:20:16.384897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.557 [2024-07-26 14:20:16.385241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.557 [2024-07-26 14:20:16.385270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.557 [2024-07-26 14:20:16.385285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.557 [2024-07-26 14:20:16.385520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.557 [2024-07-26 14:20:16.385746] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.557 [2024-07-26 14:20:16.385767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.557 [2024-07-26 14:20:16.385780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.557 [2024-07-26 14:20:16.388761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.557 [2024-07-26 14:20:16.398083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.557 [2024-07-26 14:20:16.398455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.557 [2024-07-26 14:20:16.398493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.557 [2024-07-26 14:20:16.398526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.557 [2024-07-26 14:20:16.398790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.557 [2024-07-26 14:20:16.399017] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.557 [2024-07-26 14:20:16.399037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.557 [2024-07-26 14:20:16.399052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.557 [2024-07-26 14:20:16.401886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.557 [2024-07-26 14:20:16.411181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.557 [2024-07-26 14:20:16.411587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.557 [2024-07-26 14:20:16.411616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.557 [2024-07-26 14:20:16.411632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.557 [2024-07-26 14:20:16.411866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.557 [2024-07-26 14:20:16.412069] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.557 [2024-07-26 14:20:16.412090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.557 [2024-07-26 14:20:16.412103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.557 [2024-07-26 14:20:16.415020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.557 [2024-07-26 14:20:16.424361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.557 [2024-07-26 14:20:16.424734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.557 [2024-07-26 14:20:16.424763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.557 [2024-07-26 14:20:16.424778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.557 [2024-07-26 14:20:16.425023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.557 [2024-07-26 14:20:16.425225] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.557 [2024-07-26 14:20:16.425246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.557 [2024-07-26 14:20:16.425259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.557 [2024-07-26 14:20:16.428137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.557 [2024-07-26 14:20:16.437480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.557 [2024-07-26 14:20:16.437833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.557 [2024-07-26 14:20:16.437861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.557 [2024-07-26 14:20:16.437877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.557 [2024-07-26 14:20:16.438112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.557 [2024-07-26 14:20:16.438316] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.557 [2024-07-26 14:20:16.438336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.557 [2024-07-26 14:20:16.438348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.557 [2024-07-26 14:20:16.441230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.557 [2024-07-26 14:20:16.450551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.557 [2024-07-26 14:20:16.450896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.557 [2024-07-26 14:20:16.450924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.557 [2024-07-26 14:20:16.450940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.557 [2024-07-26 14:20:16.451175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.557 [2024-07-26 14:20:16.451378] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.557 [2024-07-26 14:20:16.451398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.557 [2024-07-26 14:20:16.451411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.557 [2024-07-26 14:20:16.454292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.557 [2024-07-26 14:20:16.463629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.557 [2024-07-26 14:20:16.464011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.557 [2024-07-26 14:20:16.464040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.557 [2024-07-26 14:20:16.464055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.557 [2024-07-26 14:20:16.464273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.557 [2024-07-26 14:20:16.464476] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.558 [2024-07-26 14:20:16.464497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.558 [2024-07-26 14:20:16.464510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.558 [2024-07-26 14:20:16.467410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.558 [2024-07-26 14:20:16.476821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.558 [2024-07-26 14:20:16.477163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.558 [2024-07-26 14:20:16.477190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.558 [2024-07-26 14:20:16.477205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.558 [2024-07-26 14:20:16.477420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.558 [2024-07-26 14:20:16.477671] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.558 [2024-07-26 14:20:16.477691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.558 [2024-07-26 14:20:16.477704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.558 [2024-07-26 14:20:16.480537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.558 [2024-07-26 14:20:16.489838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.558 [2024-07-26 14:20:16.490241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.558 [2024-07-26 14:20:16.490269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.558 [2024-07-26 14:20:16.490290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.558 [2024-07-26 14:20:16.490526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.558 [2024-07-26 14:20:16.490753] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.558 [2024-07-26 14:20:16.490774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.558 [2024-07-26 14:20:16.490788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.558 [2024-07-26 14:20:16.493566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.558 [2024-07-26 14:20:16.502901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.558 [2024-07-26 14:20:16.503304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.558 [2024-07-26 14:20:16.503333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.558 [2024-07-26 14:20:16.503349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.558 [2024-07-26 14:20:16.503599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.558 [2024-07-26 14:20:16.503808] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.558 [2024-07-26 14:20:16.503829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.558 [2024-07-26 14:20:16.503842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.558 [2024-07-26 14:20:16.506778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.558 [2024-07-26 14:20:16.515909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.558 [2024-07-26 14:20:16.516315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.558 [2024-07-26 14:20:16.516344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.558 [2024-07-26 14:20:16.516360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.558 [2024-07-26 14:20:16.516609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.558 [2024-07-26 14:20:16.516809] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.558 [2024-07-26 14:20:16.516843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.558 [2024-07-26 14:20:16.516856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.558 [2024-07-26 14:20:16.519715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.558 [2024-07-26 14:20:16.528979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.558 [2024-07-26 14:20:16.529385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.558 [2024-07-26 14:20:16.529412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.558 [2024-07-26 14:20:16.529427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.558 [2024-07-26 14:20:16.529691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.558 [2024-07-26 14:20:16.529904] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.558 [2024-07-26 14:20:16.529929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.558 [2024-07-26 14:20:16.529942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.558 [2024-07-26 14:20:16.532799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.558 [2024-07-26 14:20:16.542141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.558 [2024-07-26 14:20:16.542511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.558 [2024-07-26 14:20:16.542547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.558 [2024-07-26 14:20:16.542580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.558 [2024-07-26 14:20:16.542817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.558 [2024-07-26 14:20:16.543023] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.558 [2024-07-26 14:20:16.543044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.558 [2024-07-26 14:20:16.543057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.558 [2024-07-26 14:20:16.545903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.558 [2024-07-26 14:20:16.555208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.558 [2024-07-26 14:20:16.555516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.558 [2024-07-26 14:20:16.555550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.558 [2024-07-26 14:20:16.555566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.558 [2024-07-26 14:20:16.555782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.558 [2024-07-26 14:20:16.556003] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.558 [2024-07-26 14:20:16.556024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.558 [2024-07-26 14:20:16.556037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.558 [2024-07-26 14:20:16.558837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.558 [2024-07-26 14:20:16.568341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.558 [2024-07-26 14:20:16.568701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.558 [2024-07-26 14:20:16.568731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.558 [2024-07-26 14:20:16.568748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.558 [2024-07-26 14:20:16.568976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.558 [2024-07-26 14:20:16.569216] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.558 [2024-07-26 14:20:16.569253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.558 [2024-07-26 14:20:16.569266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.558 [2024-07-26 14:20:16.572626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.818 [2024-07-26 14:20:16.581625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.818 [2024-07-26 14:20:16.581973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.818 [2024-07-26 14:20:16.582002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.818 [2024-07-26 14:20:16.582017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.818 [2024-07-26 14:20:16.582253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.818 [2024-07-26 14:20:16.582457] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.818 [2024-07-26 14:20:16.582477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.818 [2024-07-26 14:20:16.582490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.818 [2024-07-26 14:20:16.585403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.818 [2024-07-26 14:20:16.594697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.818 [2024-07-26 14:20:16.595038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.818 [2024-07-26 14:20:16.595065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.818 [2024-07-26 14:20:16.595080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.818 [2024-07-26 14:20:16.595295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.818 [2024-07-26 14:20:16.595498] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.818 [2024-07-26 14:20:16.595536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.818 [2024-07-26 14:20:16.595567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.818 [2024-07-26 14:20:16.598420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.818 [2024-07-26 14:20:16.607877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.818 [2024-07-26 14:20:16.608281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.818 [2024-07-26 14:20:16.608309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.818 [2024-07-26 14:20:16.608325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.818 [2024-07-26 14:20:16.608573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.818 [2024-07-26 14:20:16.608772] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.818 [2024-07-26 14:20:16.608792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.818 [2024-07-26 14:20:16.608805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.818 [2024-07-26 14:20:16.611692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.818 [2024-07-26 14:20:16.621117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.818 [2024-07-26 14:20:16.621520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.818 [2024-07-26 14:20:16.621557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.818 [2024-07-26 14:20:16.621574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.818 [2024-07-26 14:20:16.621814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.818 [2024-07-26 14:20:16.622053] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.818 [2024-07-26 14:20:16.622074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.818 [2024-07-26 14:20:16.622088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.818 [2024-07-26 14:20:16.625594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.818 [2024-07-26 14:20:16.634360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.818 [2024-07-26 14:20:16.634977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.818 [2024-07-26 14:20:16.635006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.818 [2024-07-26 14:20:16.635022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.818 [2024-07-26 14:20:16.635256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.818 [2024-07-26 14:20:16.635459] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.818 [2024-07-26 14:20:16.635480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.818 [2024-07-26 14:20:16.635493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.818 [2024-07-26 14:20:16.638524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.818 [2024-07-26 14:20:16.647470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.818 [2024-07-26 14:20:16.647845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.818 [2024-07-26 14:20:16.647873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.818 [2024-07-26 14:20:16.647889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.818 [2024-07-26 14:20:16.648124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.818 [2024-07-26 14:20:16.648328] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.818 [2024-07-26 14:20:16.648349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.818 [2024-07-26 14:20:16.648362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.818 [2024-07-26 14:20:16.651256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.818 [2024-07-26 14:20:16.660567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.818 [2024-07-26 14:20:16.660941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.818 [2024-07-26 14:20:16.660968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.818 [2024-07-26 14:20:16.660984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.818 [2024-07-26 14:20:16.661199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.818 [2024-07-26 14:20:16.661403] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.819 [2024-07-26 14:20:16.661423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.819 [2024-07-26 14:20:16.661440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.819 [2024-07-26 14:20:16.664364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.819 [2024-07-26 14:20:16.673660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.819 [2024-07-26 14:20:16.674105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.819 [2024-07-26 14:20:16.674137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.819 [2024-07-26 14:20:16.674154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.819 [2024-07-26 14:20:16.674394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.819 [2024-07-26 14:20:16.674626] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.819 [2024-07-26 14:20:16.674648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.819 [2024-07-26 14:20:16.674661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.819 [2024-07-26 14:20:16.677508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.819 [2024-07-26 14:20:16.686831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.819 [2024-07-26 14:20:16.687238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.819 [2024-07-26 14:20:16.687266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.819 [2024-07-26 14:20:16.687282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.819 [2024-07-26 14:20:16.687516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.819 [2024-07-26 14:20:16.687742] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.819 [2024-07-26 14:20:16.687764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.819 [2024-07-26 14:20:16.687778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.819 [2024-07-26 14:20:16.690664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.819 [2024-07-26 14:20:16.700010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.819 [2024-07-26 14:20:16.700323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.819 [2024-07-26 14:20:16.700351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.819 [2024-07-26 14:20:16.700366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.819 [2024-07-26 14:20:16.700594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.819 [2024-07-26 14:20:16.700804] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.819 [2024-07-26 14:20:16.700839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.819 [2024-07-26 14:20:16.700852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.819 [2024-07-26 14:20:16.703708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.819 [2024-07-26 14:20:16.713174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.819 [2024-07-26 14:20:16.713519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.819 [2024-07-26 14:20:16.713555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.819 [2024-07-26 14:20:16.713571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.819 [2024-07-26 14:20:16.713800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.819 [2024-07-26 14:20:16.714004] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.819 [2024-07-26 14:20:16.714025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.819 [2024-07-26 14:20:16.714038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.819 [2024-07-26 14:20:16.716835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.819 [2024-07-26 14:20:16.726294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.819 [2024-07-26 14:20:16.726634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.819 [2024-07-26 14:20:16.726662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.819 [2024-07-26 14:20:16.726678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.819 [2024-07-26 14:20:16.726909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.819 [2024-07-26 14:20:16.727113] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.819 [2024-07-26 14:20:16.727133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.819 [2024-07-26 14:20:16.727146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.819 [2024-07-26 14:20:16.729944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.819 [2024-07-26 14:20:16.739444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.819 [2024-07-26 14:20:16.739858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.819 [2024-07-26 14:20:16.739887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.819 [2024-07-26 14:20:16.739903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.819 [2024-07-26 14:20:16.740138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.819 [2024-07-26 14:20:16.740342] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.819 [2024-07-26 14:20:16.740362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.819 [2024-07-26 14:20:16.740375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.819 [2024-07-26 14:20:16.743309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.819 [2024-07-26 14:20:16.752611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.819 [2024-07-26 14:20:16.753017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.819 [2024-07-26 14:20:16.753046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.819 [2024-07-26 14:20:16.753062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.819 [2024-07-26 14:20:16.753299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.819 [2024-07-26 14:20:16.753522] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.819 [2024-07-26 14:20:16.753553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.819 [2024-07-26 14:20:16.753567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.819 [2024-07-26 14:20:16.756380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.819 [2024-07-26 14:20:16.765697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.819 [2024-07-26 14:20:16.766030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.819 [2024-07-26 14:20:16.766057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.819 [2024-07-26 14:20:16.766073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.819 [2024-07-26 14:20:16.766290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.819 [2024-07-26 14:20:16.766496] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.819 [2024-07-26 14:20:16.766542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.819 [2024-07-26 14:20:16.766558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.819 [2024-07-26 14:20:16.769377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.819 [2024-07-26 14:20:16.778875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.819 [2024-07-26 14:20:16.779290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.819 [2024-07-26 14:20:16.779318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.819 [2024-07-26 14:20:16.779334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.819 [2024-07-26 14:20:16.779582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.819 [2024-07-26 14:20:16.779782] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.819 [2024-07-26 14:20:16.779803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.819 [2024-07-26 14:20:16.779816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.819 [2024-07-26 14:20:16.782703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.819 [2024-07-26 14:20:16.792152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.819 [2024-07-26 14:20:16.792609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.819 [2024-07-26 14:20:16.792638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.819 [2024-07-26 14:20:16.792654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.819 [2024-07-26 14:20:16.792882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.819 [2024-07-26 14:20:16.793092] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.820 [2024-07-26 14:20:16.793111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.820 [2024-07-26 14:20:16.793124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.820 [2024-07-26 14:20:16.796184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.820 [2024-07-26 14:20:16.805533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.820 [2024-07-26 14:20:16.805874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.820 [2024-07-26 14:20:16.805904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.820 [2024-07-26 14:20:16.805920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.820 [2024-07-26 14:20:16.806152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.820 [2024-07-26 14:20:16.806362] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.820 [2024-07-26 14:20:16.806382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.820 [2024-07-26 14:20:16.806395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.820 [2024-07-26 14:20:16.809406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.820 [2024-07-26 14:20:16.818937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.820 [2024-07-26 14:20:16.819255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.820 [2024-07-26 14:20:16.819283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.820 [2024-07-26 14:20:16.819299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.820 [2024-07-26 14:20:16.819522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.820 [2024-07-26 14:20:16.819736] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.820 [2024-07-26 14:20:16.819758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.820 [2024-07-26 14:20:16.819772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:08.820 [2024-07-26 14:20:16.822752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:08.820 [2024-07-26 14:20:16.832442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:08.820 [2024-07-26 14:20:16.832827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.820 [2024-07-26 14:20:16.832857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:08.820 [2024-07-26 14:20:16.832873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:08.820 [2024-07-26 14:20:16.833104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:08.820 [2024-07-26 14:20:16.833341] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:08.820 [2024-07-26 14:20:16.833363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:08.820 [2024-07-26 14:20:16.833392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.079 [2024-07-26 14:20:16.836669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.079 [2024-07-26 14:20:16.845765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.079 [2024-07-26 14:20:16.846208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079 [2024-07-26 14:20:16.846237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.079 [2024-07-26 14:20:16.846258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.079 [2024-07-26 14:20:16.846495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.079 [2024-07-26 14:20:16.846741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.079 [2024-07-26 14:20:16.846764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.079 [2024-07-26 14:20:16.846778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.079 [2024-07-26 14:20:16.849861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.079 [2024-07-26 14:20:16.859117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.079 [2024-07-26 14:20:16.859461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079 [2024-07-26 14:20:16.859487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.079 [2024-07-26 14:20:16.859503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.079 [2024-07-26 14:20:16.859756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.079 [2024-07-26 14:20:16.859984] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.079 [2024-07-26 14:20:16.860003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.079 [2024-07-26 14:20:16.860016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.079 [2024-07-26 14:20:16.863031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.079 [2024-07-26 14:20:16.872334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.079 [2024-07-26 14:20:16.872724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079 [2024-07-26 14:20:16.872753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.079 [2024-07-26 14:20:16.872769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.079 [2024-07-26 14:20:16.873013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.079 [2024-07-26 14:20:16.873242] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.079 [2024-07-26 14:20:16.873264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.079 [2024-07-26 14:20:16.873277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.079 [2024-07-26 14:20:16.876840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.079 [2024-07-26 14:20:16.885535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.079 [2024-07-26 14:20:16.885891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079 [2024-07-26 14:20:16.885919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.079 [2024-07-26 14:20:16.885935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.079 [2024-07-26 14:20:16.886169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.079 [2024-07-26 14:20:16.886373] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.079 [2024-07-26 14:20:16.886398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.079 [2024-07-26 14:20:16.886412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.079 [2024-07-26 14:20:16.889353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.079 [2024-07-26 14:20:16.898727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.079 [2024-07-26 14:20:16.899088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079 [2024-07-26 14:20:16.899116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.079 [2024-07-26 14:20:16.899132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.079 [2024-07-26 14:20:16.899369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.079 [2024-07-26 14:20:16.899615] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.079 [2024-07-26 14:20:16.899636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.079 [2024-07-26 14:20:16.899650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.079 [2024-07-26 14:20:16.902581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.079 [2024-07-26 14:20:16.911857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.079 [2024-07-26 14:20:16.912265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079 [2024-07-26 14:20:16.912294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.079 [2024-07-26 14:20:16.912310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.079 [2024-07-26 14:20:16.912554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.079 [2024-07-26 14:20:16.912754] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.079 [2024-07-26 14:20:16.912773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.079 [2024-07-26 14:20:16.912788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.079 [2024-07-26 14:20:16.915703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.079 [2024-07-26 14:20:16.924975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.079 [2024-07-26 14:20:16.925382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.079 [2024-07-26 14:20:16.925411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.079 [2024-07-26 14:20:16.925427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.079 [2024-07-26 14:20:16.925693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.079 [2024-07-26 14:20:16.925902] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.080 [2024-07-26 14:20:16.925922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.080 [2024-07-26 14:20:16.925935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.080 [2024-07-26 14:20:16.928847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.080 [2024-07-26 14:20:16.938079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.080 [2024-07-26 14:20:16.938421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.080 [2024-07-26 14:20:16.938449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.080 [2024-07-26 14:20:16.938465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.080 [2024-07-26 14:20:16.938725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.080 [2024-07-26 14:20:16.938936] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.080 [2024-07-26 14:20:16.938956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.080 [2024-07-26 14:20:16.938969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.080 [2024-07-26 14:20:16.941830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.080 [2024-07-26 14:20:16.951142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.080 [2024-07-26 14:20:16.951517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.080 [2024-07-26 14:20:16.951559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.080 [2024-07-26 14:20:16.951576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.080 [2024-07-26 14:20:16.951792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.080 [2024-07-26 14:20:16.952005] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.080 [2024-07-26 14:20:16.952025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.080 [2024-07-26 14:20:16.952038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.080 [2024-07-26 14:20:16.954873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.080 [2024-07-26 14:20:16.964326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.080 [2024-07-26 14:20:16.964697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.080 [2024-07-26 14:20:16.964726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.080 [2024-07-26 14:20:16.964742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.080 [2024-07-26 14:20:16.964987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.080 [2024-07-26 14:20:16.965189] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.080 [2024-07-26 14:20:16.965209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.080 [2024-07-26 14:20:16.965222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.080 [2024-07-26 14:20:16.968097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.080 [2024-07-26 14:20:16.977507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.080 [2024-07-26 14:20:16.977887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.080 [2024-07-26 14:20:16.977915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.080 [2024-07-26 14:20:16.977930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.080 [2024-07-26 14:20:16.978150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.080 [2024-07-26 14:20:16.978353] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.080 [2024-07-26 14:20:16.978372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.080 [2024-07-26 14:20:16.978385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.080 [2024-07-26 14:20:16.981310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.080 [2024-07-26 14:20:16.990681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.080 [2024-07-26 14:20:16.991105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.080 [2024-07-26 14:20:16.991133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.080 [2024-07-26 14:20:16.991149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.080 [2024-07-26 14:20:16.991378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.080 [2024-07-26 14:20:16.991625] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.080 [2024-07-26 14:20:16.991648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.080 [2024-07-26 14:20:16.991661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.080 [2024-07-26 14:20:16.994554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.080 [2024-07-26 14:20:17.003751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.080 [2024-07-26 14:20:17.004140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.080 [2024-07-26 14:20:17.004194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.080 [2024-07-26 14:20:17.004209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.080 [2024-07-26 14:20:17.004447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.080 [2024-07-26 14:20:17.004665] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.080 [2024-07-26 14:20:17.004687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.080 [2024-07-26 14:20:17.004700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.080 [2024-07-26 14:20:17.007569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.080 [2024-07-26 14:20:17.016952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.080 [2024-07-26 14:20:17.017357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.080 [2024-07-26 14:20:17.017385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.080 [2024-07-26 14:20:17.017401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.080 [2024-07-26 14:20:17.017649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.080 [2024-07-26 14:20:17.017849] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.080 [2024-07-26 14:20:17.017870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.080 [2024-07-26 14:20:17.017906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.080 [2024-07-26 14:20:17.020785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.080 [2024-07-26 14:20:17.029960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.080 [2024-07-26 14:20:17.030366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.080 [2024-07-26 14:20:17.030394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.080 [2024-07-26 14:20:17.030409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.080 [2024-07-26 14:20:17.030658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.080 [2024-07-26 14:20:17.030886] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.080 [2024-07-26 14:20:17.030906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.080 [2024-07-26 14:20:17.030919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.080 [2024-07-26 14:20:17.033780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.080 [2024-07-26 14:20:17.043083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.080 [2024-07-26 14:20:17.043394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.080 [2024-07-26 14:20:17.043422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.080 [2024-07-26 14:20:17.043438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.080 [2024-07-26 14:20:17.043685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.080 [2024-07-26 14:20:17.043927] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.080 [2024-07-26 14:20:17.043946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.080 [2024-07-26 14:20:17.043959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.080 [2024-07-26 14:20:17.046843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.080 [2024-07-26 14:20:17.056139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.080 [2024-07-26 14:20:17.056450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.080 [2024-07-26 14:20:17.056477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.080 [2024-07-26 14:20:17.056493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.080 [2024-07-26 14:20:17.056772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.080 [2024-07-26 14:20:17.056984] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.081 [2024-07-26 14:20:17.057004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.081 [2024-07-26 14:20:17.057017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.081 [2024-07-26 14:20:17.059893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.081 [2024-07-26 14:20:17.069262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.081 [2024-07-26 14:20:17.069666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.081 [2024-07-26 14:20:17.069700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.081 [2024-07-26 14:20:17.069717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.081 [2024-07-26 14:20:17.069953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.081 [2024-07-26 14:20:17.070157] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.081 [2024-07-26 14:20:17.070177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.081 [2024-07-26 14:20:17.070190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.081 [2024-07-26 14:20:17.073220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.081 [2024-07-26 14:20:17.082721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.081 [2024-07-26 14:20:17.083064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.081 [2024-07-26 14:20:17.083105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.081 [2024-07-26 14:20:17.083121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.081 [2024-07-26 14:20:17.083336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.081 [2024-07-26 14:20:17.083570] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.081 [2024-07-26 14:20:17.083608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.081 [2024-07-26 14:20:17.083622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.081 [2024-07-26 14:20:17.086693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.339 [2024-07-26 14:20:17.096279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.339 [2024-07-26 14:20:17.096656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.339 [2024-07-26 14:20:17.096684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.339 [2024-07-26 14:20:17.096701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.339 [2024-07-26 14:20:17.096942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.339 [2024-07-26 14:20:17.097163] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.339 [2024-07-26 14:20:17.097183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.339 [2024-07-26 14:20:17.097195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.339 [2024-07-26 14:20:17.100284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.339 [2024-07-26 14:20:17.109708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.339 [2024-07-26 14:20:17.110099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.339 [2024-07-26 14:20:17.110127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.339 [2024-07-26 14:20:17.110143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.339 [2024-07-26 14:20:17.110379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.339 [2024-07-26 14:20:17.110642] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.339 [2024-07-26 14:20:17.110664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.340 [2024-07-26 14:20:17.110679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.340 [2024-07-26 14:20:17.113797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.340 [2024-07-26 14:20:17.123132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.340 [2024-07-26 14:20:17.123514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.340 [2024-07-26 14:20:17.123553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.340 [2024-07-26 14:20:17.123576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.340 [2024-07-26 14:20:17.123806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.340 [2024-07-26 14:20:17.124044] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.340 [2024-07-26 14:20:17.124066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.340 [2024-07-26 14:20:17.124080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.340 [2024-07-26 14:20:17.127487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.340 [2024-07-26 14:20:17.136368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.340 [2024-07-26 14:20:17.136721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.340 [2024-07-26 14:20:17.136750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.340 [2024-07-26 14:20:17.136766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.340 [2024-07-26 14:20:17.137018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.340 [2024-07-26 14:20:17.137213] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.340 [2024-07-26 14:20:17.137233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.340 [2024-07-26 14:20:17.137246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.340 [2024-07-26 14:20:17.140282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.340 [2024-07-26 14:20:17.149821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.340 [2024-07-26 14:20:17.150222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.340 [2024-07-26 14:20:17.150254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.340 [2024-07-26 14:20:17.150286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.340 [2024-07-26 14:20:17.150510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.340 [2024-07-26 14:20:17.150726] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.340 [2024-07-26 14:20:17.150748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.340 [2024-07-26 14:20:17.150761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.340 [2024-07-26 14:20:17.153866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.340 [2024-07-26 14:20:17.163108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.340 [2024-07-26 14:20:17.163521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.340 [2024-07-26 14:20:17.163574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.340 [2024-07-26 14:20:17.163595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.340 [2024-07-26 14:20:17.163837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.340 [2024-07-26 14:20:17.164065] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.340 [2024-07-26 14:20:17.164086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.340 [2024-07-26 14:20:17.164100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.340 [2024-07-26 14:20:17.167064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.340 [2024-07-26 14:20:17.176371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.340 [2024-07-26 14:20:17.176754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.340 [2024-07-26 14:20:17.176783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.340 [2024-07-26 14:20:17.176800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.340 [2024-07-26 14:20:17.177050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.340 [2024-07-26 14:20:17.177245] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.340 [2024-07-26 14:20:17.177267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.340 [2024-07-26 14:20:17.177280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.340 [2024-07-26 14:20:17.180269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.340 [2024-07-26 14:20:17.189857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.340 [2024-07-26 14:20:17.190205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.340 [2024-07-26 14:20:17.190231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.340 [2024-07-26 14:20:17.190246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.340 [2024-07-26 14:20:17.190462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.340 [2024-07-26 14:20:17.190703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.340 [2024-07-26 14:20:17.190725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.340 [2024-07-26 14:20:17.190738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.340 [2024-07-26 14:20:17.193747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.340 [2024-07-26 14:20:17.203038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.340 [2024-07-26 14:20:17.203474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.340 [2024-07-26 14:20:17.203503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.340 [2024-07-26 14:20:17.203524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.340 [2024-07-26 14:20:17.203780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.340 [2024-07-26 14:20:17.203992] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.340 [2024-07-26 14:20:17.204012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.340 [2024-07-26 14:20:17.204025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.340 [2024-07-26 14:20:17.206990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.340 [2024-07-26 14:20:17.216281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.340 [2024-07-26 14:20:17.216632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.340 [2024-07-26 14:20:17.216662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.340 [2024-07-26 14:20:17.216678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.340 [2024-07-26 14:20:17.216921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.340 [2024-07-26 14:20:17.217116] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.340 [2024-07-26 14:20:17.217135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.340 [2024-07-26 14:20:17.217148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.340 [2024-07-26 14:20:17.220151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.340 [2024-07-26 14:20:17.229663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.340 [2024-07-26 14:20:17.230001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.340 [2024-07-26 14:20:17.230030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.340 [2024-07-26 14:20:17.230045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.340 [2024-07-26 14:20:17.230268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.340 [2024-07-26 14:20:17.230478] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.340 [2024-07-26 14:20:17.230498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.340 [2024-07-26 14:20:17.230526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.340 [2024-07-26 14:20:17.233484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.340 [2024-07-26 14:20:17.243019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.340 [2024-07-26 14:20:17.243403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.340 [2024-07-26 14:20:17.243431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.340 [2024-07-26 14:20:17.243446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.340 [2024-07-26 14:20:17.243699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.340 [2024-07-26 14:20:17.243931] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.340 [2024-07-26 14:20:17.243955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.340 [2024-07-26 14:20:17.243969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.340 [2024-07-26 14:20:17.246944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.341 [2024-07-26 14:20:17.256240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.341 [2024-07-26 14:20:17.256627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.341 [2024-07-26 14:20:17.256656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.341 [2024-07-26 14:20:17.256673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.341 [2024-07-26 14:20:17.256902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.341 [2024-07-26 14:20:17.257111] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.341 [2024-07-26 14:20:17.257131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.341 [2024-07-26 14:20:17.257143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.341 [2024-07-26 14:20:17.260140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.341 [2024-07-26 14:20:17.269396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.341 [2024-07-26 14:20:17.269837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.341 [2024-07-26 14:20:17.269866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.341 [2024-07-26 14:20:17.269883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.341 [2024-07-26 14:20:17.270125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.341 [2024-07-26 14:20:17.270334] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.341 [2024-07-26 14:20:17.270355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.341 [2024-07-26 14:20:17.270369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.341 [2024-07-26 14:20:17.273371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.341 [2024-07-26 14:20:17.282758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.341 [2024-07-26 14:20:17.283188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.341 [2024-07-26 14:20:17.283217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.341 [2024-07-26 14:20:17.283232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.341 [2024-07-26 14:20:17.283474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.341 [2024-07-26 14:20:17.283717] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.341 [2024-07-26 14:20:17.283738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.341 [2024-07-26 14:20:17.283752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.341 [2024-07-26 14:20:17.286745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.341 [2024-07-26 14:20:17.296058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.341 [2024-07-26 14:20:17.296421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.341 [2024-07-26 14:20:17.296451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.341 [2024-07-26 14:20:17.296467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.341 [2024-07-26 14:20:17.296718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.341 [2024-07-26 14:20:17.296931] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.341 [2024-07-26 14:20:17.296951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.341 [2024-07-26 14:20:17.296964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.341 [2024-07-26 14:20:17.299968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.341 [2024-07-26 14:20:17.309275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.341 [2024-07-26 14:20:17.309669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.341 [2024-07-26 14:20:17.309706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.341 [2024-07-26 14:20:17.309722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.341 [2024-07-26 14:20:17.309950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.341 [2024-07-26 14:20:17.310159] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.341 [2024-07-26 14:20:17.310179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.341 [2024-07-26 14:20:17.310192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.341 [2024-07-26 14:20:17.313155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.341 [2024-07-26 14:20:17.322506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.341 [2024-07-26 14:20:17.323016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.341 [2024-07-26 14:20:17.323044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.341 [2024-07-26 14:20:17.323060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.341 [2024-07-26 14:20:17.323282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.341 [2024-07-26 14:20:17.323492] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.341 [2024-07-26 14:20:17.323535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.341 [2024-07-26 14:20:17.323551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.341 [2024-07-26 14:20:17.326508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.341 [2024-07-26 14:20:17.335872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.341 [2024-07-26 14:20:17.336241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.341 [2024-07-26 14:20:17.336271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.341 [2024-07-26 14:20:17.336287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.341 [2024-07-26 14:20:17.336545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.341 [2024-07-26 14:20:17.336766] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.341 [2024-07-26 14:20:17.336788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.341 [2024-07-26 14:20:17.336801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.341 [2024-07-26 14:20:17.339764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.341 [2024-07-26 14:20:17.349369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.341 [2024-07-26 14:20:17.349775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.341 [2024-07-26 14:20:17.349805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.341 [2024-07-26 14:20:17.349821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.341 [2024-07-26 14:20:17.350059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.341 [2024-07-26 14:20:17.350274] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.341 [2024-07-26 14:20:17.350295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.341 [2024-07-26 14:20:17.350309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.341 [2024-07-26 14:20:17.353521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.601 [2024-07-26 14:20:17.362964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.601 [2024-07-26 14:20:17.363290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.601 [2024-07-26 14:20:17.363318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.601 [2024-07-26 14:20:17.363334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.601 [2024-07-26 14:20:17.363567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.601 [2024-07-26 14:20:17.363773] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.601 [2024-07-26 14:20:17.363794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.601 [2024-07-26 14:20:17.363808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.601 [2024-07-26 14:20:17.366824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.601 [2024-07-26 14:20:17.376272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.601 [2024-07-26 14:20:17.376667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.601 [2024-07-26 14:20:17.376696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.601 [2024-07-26 14:20:17.376712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.601 [2024-07-26 14:20:17.376943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.601 [2024-07-26 14:20:17.377202] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.601 [2024-07-26 14:20:17.377225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.601 [2024-07-26 14:20:17.377244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.601 [2024-07-26 14:20:17.380584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.601 [2024-07-26 14:20:17.389560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.601 [2024-07-26 14:20:17.389988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.601 [2024-07-26 14:20:17.390017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.601 [2024-07-26 14:20:17.390033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.601 [2024-07-26 14:20:17.390277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.601 [2024-07-26 14:20:17.390485] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.601 [2024-07-26 14:20:17.390506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.601 [2024-07-26 14:20:17.390543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.601 [2024-07-26 14:20:17.393590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.601 [2024-07-26 14:20:17.402878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.601 [2024-07-26 14:20:17.403247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.601 [2024-07-26 14:20:17.403277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.601 [2024-07-26 14:20:17.403292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.601 [2024-07-26 14:20:17.403537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.601 [2024-07-26 14:20:17.403757] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.601 [2024-07-26 14:20:17.403780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.601 [2024-07-26 14:20:17.403793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.601 [2024-07-26 14:20:17.406770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.601 [2024-07-26 14:20:17.416079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.601 [2024-07-26 14:20:17.416430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.601 [2024-07-26 14:20:17.416458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.601 [2024-07-26 14:20:17.416473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.601 [2024-07-26 14:20:17.416738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.601 [2024-07-26 14:20:17.416954] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.601 [2024-07-26 14:20:17.416975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.601 [2024-07-26 14:20:17.416988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.601 [2024-07-26 14:20:17.419946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.601 [2024-07-26 14:20:17.429374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.601 [2024-07-26 14:20:17.429755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.601 [2024-07-26 14:20:17.429784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.601 [2024-07-26 14:20:17.429800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.601 [2024-07-26 14:20:17.430043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.601 [2024-07-26 14:20:17.430252] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.601 [2024-07-26 14:20:17.430272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.601 [2024-07-26 14:20:17.430286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.601 [2024-07-26 14:20:17.433285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.601 [2024-07-26 14:20:17.442607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.601 [2024-07-26 14:20:17.442996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.601 [2024-07-26 14:20:17.443025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.601 [2024-07-26 14:20:17.443041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.601 [2024-07-26 14:20:17.443283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.601 [2024-07-26 14:20:17.443492] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.601 [2024-07-26 14:20:17.443513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.601 [2024-07-26 14:20:17.443525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.601 [2024-07-26 14:20:17.446510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.601 [2024-07-26 14:20:17.455781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.601 [2024-07-26 14:20:17.456083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.601 [2024-07-26 14:20:17.456125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.601 [2024-07-26 14:20:17.456141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.601 [2024-07-26 14:20:17.456358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.601 [2024-07-26 14:20:17.456611] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.601 [2024-07-26 14:20:17.456634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.601 [2024-07-26 14:20:17.456648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.601 [2024-07-26 14:20:17.459623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.601 [2024-07-26 14:20:17.469113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.601 [2024-07-26 14:20:17.469499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.601 [2024-07-26 14:20:17.469534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.601 [2024-07-26 14:20:17.469568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.601 [2024-07-26 14:20:17.469813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.602 [2024-07-26 14:20:17.470027] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.602 [2024-07-26 14:20:17.470048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.602 [2024-07-26 14:20:17.470061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.602 [2024-07-26 14:20:17.473018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.602 [2024-07-26 14:20:17.482428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.602 [2024-07-26 14:20:17.482737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.602 [2024-07-26 14:20:17.482779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.602 [2024-07-26 14:20:17.482796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.602 [2024-07-26 14:20:17.483029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.602 [2024-07-26 14:20:17.483238] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.602 [2024-07-26 14:20:17.483258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.602 [2024-07-26 14:20:17.483271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.602 [2024-07-26 14:20:17.486271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.602 [2024-07-26 14:20:17.495736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.602 [2024-07-26 14:20:17.496141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.602 [2024-07-26 14:20:17.496169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.602 [2024-07-26 14:20:17.496184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.602 [2024-07-26 14:20:17.496407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.602 [2024-07-26 14:20:17.496662] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.602 [2024-07-26 14:20:17.496685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.602 [2024-07-26 14:20:17.496698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.602 [2024-07-26 14:20:17.499679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.602 [2024-07-26 14:20:17.508946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.602 [2024-07-26 14:20:17.509300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.602 [2024-07-26 14:20:17.509329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.602 [2024-07-26 14:20:17.509345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.602 [2024-07-26 14:20:17.509600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.602 [2024-07-26 14:20:17.509805] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.602 [2024-07-26 14:20:17.509841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.602 [2024-07-26 14:20:17.509855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.602 [2024-07-26 14:20:17.512829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.602 [2024-07-26 14:20:17.522265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.602 [2024-07-26 14:20:17.522677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.602 [2024-07-26 14:20:17.522706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:09.602 [2024-07-26 14:20:17.522722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:09.602 [2024-07-26 14:20:17.522964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:09.602 [2024-07-26 14:20:17.523173] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.602 [2024-07-26 14:20:17.523193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.602 [2024-07-26 14:20:17.523207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.602 [2024-07-26 14:20:17.526203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.602 [2024-07-26 14:20:17.535535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.602 [2024-07-26 14:20:17.535910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.602 [2024-07-26 14:20:17.535940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.602 [2024-07-26 14:20:17.535957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.602 [2024-07-26 14:20:17.536200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.602 [2024-07-26 14:20:17.536408] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.602 [2024-07-26 14:20:17.536429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.602 [2024-07-26 14:20:17.536442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.602 [2024-07-26 14:20:17.539440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.602 [2024-07-26 14:20:17.548766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.602 [2024-07-26 14:20:17.549135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.602 [2024-07-26 14:20:17.549165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.602 [2024-07-26 14:20:17.549180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.602 [2024-07-26 14:20:17.549422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.602 [2024-07-26 14:20:17.549677] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.602 [2024-07-26 14:20:17.549700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.602 [2024-07-26 14:20:17.549713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.602 [2024-07-26 14:20:17.552693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.602 [2024-07-26 14:20:17.561984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.602 [2024-07-26 14:20:17.562367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.602 [2024-07-26 14:20:17.562394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.602 [2024-07-26 14:20:17.562418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.602 [2024-07-26 14:20:17.562669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.602 [2024-07-26 14:20:17.562917] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.602 [2024-07-26 14:20:17.562939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.602 [2024-07-26 14:20:17.562951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.602 [2024-07-26 14:20:17.565909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.602 [2024-07-26 14:20:17.575133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.602 [2024-07-26 14:20:17.575547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.602 [2024-07-26 14:20:17.575577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.602 [2024-07-26 14:20:17.575593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.602 [2024-07-26 14:20:17.575836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.602 [2024-07-26 14:20:17.576045] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.602 [2024-07-26 14:20:17.576066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.602 [2024-07-26 14:20:17.576080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.602 [2024-07-26 14:20:17.579065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.602 [2024-07-26 14:20:17.588332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.602 [2024-07-26 14:20:17.588692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.602 [2024-07-26 14:20:17.588721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.602 [2024-07-26 14:20:17.588737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.602 [2024-07-26 14:20:17.588991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.602 [2024-07-26 14:20:17.589184] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.602 [2024-07-26 14:20:17.589204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.602 [2024-07-26 14:20:17.589217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.602 [2024-07-26 14:20:17.592214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.602 [2024-07-26 14:20:17.601552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.602 [2024-07-26 14:20:17.601884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.602 [2024-07-26 14:20:17.601912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.602 [2024-07-26 14:20:17.601929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.602 [2024-07-26 14:20:17.602152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.602 [2024-07-26 14:20:17.602362] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.602 [2024-07-26 14:20:17.602387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.603 [2024-07-26 14:20:17.602400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.603 [2024-07-26 14:20:17.605398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.603 [2024-07-26 14:20:17.615079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.603 [2024-07-26 14:20:17.615381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.603 [2024-07-26 14:20:17.615408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.603 [2024-07-26 14:20:17.615439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.603 [2024-07-26 14:20:17.615667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.603 [2024-07-26 14:20:17.615901] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.603 [2024-07-26 14:20:17.615937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.603 [2024-07-26 14:20:17.615952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.862 [2024-07-26 14:20:17.619194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.862 [2024-07-26 14:20:17.628291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.862 [2024-07-26 14:20:17.628665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.862 [2024-07-26 14:20:17.628696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.862 [2024-07-26 14:20:17.628713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.862 [2024-07-26 14:20:17.628942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.862 [2024-07-26 14:20:17.629211] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.862 [2024-07-26 14:20:17.629234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.862 [2024-07-26 14:20:17.629248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.862 [2024-07-26 14:20:17.632592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.862 [2024-07-26 14:20:17.641574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.862 [2024-07-26 14:20:17.641984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.862 [2024-07-26 14:20:17.642013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.862 [2024-07-26 14:20:17.642029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.862 [2024-07-26 14:20:17.642270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.862 [2024-07-26 14:20:17.642479] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.862 [2024-07-26 14:20:17.642500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.862 [2024-07-26 14:20:17.642535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.862 [2024-07-26 14:20:17.645599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.862 [2024-07-26 14:20:17.655084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.862 [2024-07-26 14:20:17.655499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.862 [2024-07-26 14:20:17.655549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.862 [2024-07-26 14:20:17.655568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.862 [2024-07-26 14:20:17.655812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.862 [2024-07-26 14:20:17.656021] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.862 [2024-07-26 14:20:17.656042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.862 [2024-07-26 14:20:17.656055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.862 [2024-07-26 14:20:17.659122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.862 [2024-07-26 14:20:17.668388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.862 [2024-07-26 14:20:17.668766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.862 [2024-07-26 14:20:17.668795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.862 [2024-07-26 14:20:17.668826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.862 [2024-07-26 14:20:17.669061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.862 [2024-07-26 14:20:17.669269] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.862 [2024-07-26 14:20:17.669290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.862 [2024-07-26 14:20:17.669304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.862 [2024-07-26 14:20:17.672312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.862 [2024-07-26 14:20:17.681699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.862 [2024-07-26 14:20:17.682073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.862 [2024-07-26 14:20:17.682102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.862 [2024-07-26 14:20:17.682118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.862 [2024-07-26 14:20:17.682359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.862 [2024-07-26 14:20:17.682611] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.862 [2024-07-26 14:20:17.682634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.862 [2024-07-26 14:20:17.682648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.862 [2024-07-26 14:20:17.685625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.862 [2024-07-26 14:20:17.694915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.862 [2024-07-26 14:20:17.695330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.862 [2024-07-26 14:20:17.695359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.862 [2024-07-26 14:20:17.695376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.862 [2024-07-26 14:20:17.695622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.862 [2024-07-26 14:20:17.695847] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.862 [2024-07-26 14:20:17.695868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.862 [2024-07-26 14:20:17.695881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.862 [2024-07-26 14:20:17.698842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.862 [2024-07-26 14:20:17.708111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.862 [2024-07-26 14:20:17.708477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.862 [2024-07-26 14:20:17.708505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.862 [2024-07-26 14:20:17.708546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.862 [2024-07-26 14:20:17.708788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.862 [2024-07-26 14:20:17.709015] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.862 [2024-07-26 14:20:17.709036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.862 [2024-07-26 14:20:17.709048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.862 [2024-07-26 14:20:17.712004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.863 [2024-07-26 14:20:17.721422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.863 [2024-07-26 14:20:17.721738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.863 [2024-07-26 14:20:17.721782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.863 [2024-07-26 14:20:17.721798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.863 [2024-07-26 14:20:17.722022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.863 [2024-07-26 14:20:17.722232] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.863 [2024-07-26 14:20:17.722252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.863 [2024-07-26 14:20:17.722265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.863 [2024-07-26 14:20:17.725263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.863 [2024-07-26 14:20:17.734720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.863 [2024-07-26 14:20:17.735151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.863 [2024-07-26 14:20:17.735180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.863 [2024-07-26 14:20:17.735196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.863 [2024-07-26 14:20:17.735438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.863 [2024-07-26 14:20:17.735693] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.863 [2024-07-26 14:20:17.735715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.863 [2024-07-26 14:20:17.735734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.863 [2024-07-26 14:20:17.738740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.863 [2024-07-26 14:20:17.748060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.863 [2024-07-26 14:20:17.748419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.863 [2024-07-26 14:20:17.748448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.863 [2024-07-26 14:20:17.748465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.863 [2024-07-26 14:20:17.748707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.863 [2024-07-26 14:20:17.748943] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.863 [2024-07-26 14:20:17.748964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.863 [2024-07-26 14:20:17.748976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.863 [2024-07-26 14:20:17.751935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.863 [2024-07-26 14:20:17.761362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.863 [2024-07-26 14:20:17.761739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.863 [2024-07-26 14:20:17.761768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.863 [2024-07-26 14:20:17.761784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.863 [2024-07-26 14:20:17.762031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.863 [2024-07-26 14:20:17.762224] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.863 [2024-07-26 14:20:17.762244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.863 [2024-07-26 14:20:17.762257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.863 [2024-07-26 14:20:17.765261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.863 [2024-07-26 14:20:17.774553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.863 [2024-07-26 14:20:17.774918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.863 [2024-07-26 14:20:17.774946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.863 [2024-07-26 14:20:17.774962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.863 [2024-07-26 14:20:17.775183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.863 [2024-07-26 14:20:17.775392] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.863 [2024-07-26 14:20:17.775413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.863 [2024-07-26 14:20:17.775426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.863 [2024-07-26 14:20:17.778434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.863 [2024-07-26 14:20:17.787949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.863 [2024-07-26 14:20:17.788372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.863 [2024-07-26 14:20:17.788402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.863 [2024-07-26 14:20:17.788419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.863 [2024-07-26 14:20:17.788646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.863 [2024-07-26 14:20:17.788891] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.863 [2024-07-26 14:20:17.788912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.863 [2024-07-26 14:20:17.788925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.863 [2024-07-26 14:20:17.791884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.863 [2024-07-26 14:20:17.801156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.863 [2024-07-26 14:20:17.801510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.863 [2024-07-26 14:20:17.801560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.863 [2024-07-26 14:20:17.801578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.863 [2024-07-26 14:20:17.801834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.863 [2024-07-26 14:20:17.802030] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.863 [2024-07-26 14:20:17.802050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.863 [2024-07-26 14:20:17.802063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.863 [2024-07-26 14:20:17.805050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.863 [2024-07-26 14:20:17.814462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.863 [2024-07-26 14:20:17.814946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.863 [2024-07-26 14:20:17.814976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.863 [2024-07-26 14:20:17.814992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.863 [2024-07-26 14:20:17.815245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.863 [2024-07-26 14:20:17.815438] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.863 [2024-07-26 14:20:17.815458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.863 [2024-07-26 14:20:17.815471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.863 [2024-07-26 14:20:17.818483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.863 [2024-07-26 14:20:17.827776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.863 [2024-07-26 14:20:17.828140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.863 [2024-07-26 14:20:17.828167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.863 [2024-07-26 14:20:17.828182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.863 [2024-07-26 14:20:17.828397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.863 [2024-07-26 14:20:17.828654] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.863 [2024-07-26 14:20:17.828677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.863 [2024-07-26 14:20:17.828690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.863 [2024-07-26 14:20:17.831670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.863 [2024-07-26 14:20:17.840965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.863 [2024-07-26 14:20:17.841345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.863 [2024-07-26 14:20:17.841373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.863 [2024-07-26 14:20:17.841389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.863 [2024-07-26 14:20:17.841642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.863 [2024-07-26 14:20:17.841879] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.863 [2024-07-26 14:20:17.841914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.863 [2024-07-26 14:20:17.841927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.863 [2024-07-26 14:20:17.844935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.863 [2024-07-26 14:20:17.854237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.863 [2024-07-26 14:20:17.854653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.864 [2024-07-26 14:20:17.854682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.864 [2024-07-26 14:20:17.854698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.864 [2024-07-26 14:20:17.854938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.864 [2024-07-26 14:20:17.855148] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.864 [2024-07-26 14:20:17.855169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.864 [2024-07-26 14:20:17.855183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.864 [2024-07-26 14:20:17.858103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:09.864 [2024-07-26 14:20:17.867497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:09.864 [2024-07-26 14:20:17.867834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.864 [2024-07-26 14:20:17.867862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:09.864 [2024-07-26 14:20:17.867878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:09.864 [2024-07-26 14:20:17.868101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:09.864 [2024-07-26 14:20:17.868311] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:09.864 [2024-07-26 14:20:17.868332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:09.864 [2024-07-26 14:20:17.868345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:09.864 [2024-07-26 14:20:17.871336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.123 [2024-07-26 14:20:17.880968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.123 [2024-07-26 14:20:17.881339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.123 [2024-07-26 14:20:17.881368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.123 [2024-07-26 14:20:17.881385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.123 [2024-07-26 14:20:17.881612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.123 [2024-07-26 14:20:17.881847] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.123 [2024-07-26 14:20:17.881869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.123 [2024-07-26 14:20:17.881899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.123 [2024-07-26 14:20:17.885204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.123 [2024-07-26 14:20:17.894249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.123 [2024-07-26 14:20:17.894603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.124 [2024-07-26 14:20:17.894632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.124 [2024-07-26 14:20:17.894649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.124 [2024-07-26 14:20:17.894890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.124 [2024-07-26 14:20:17.895099] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.124 [2024-07-26 14:20:17.895120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.124 [2024-07-26 14:20:17.895133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.124 [2024-07-26 14:20:17.898215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.124 [2024-07-26 14:20:17.907572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.124 [2024-07-26 14:20:17.907934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.124 [2024-07-26 14:20:17.907962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.124 [2024-07-26 14:20:17.907977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.124 [2024-07-26 14:20:17.908218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.124 [2024-07-26 14:20:17.908411] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.124 [2024-07-26 14:20:17.908431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.124 [2024-07-26 14:20:17.908444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.124 [2024-07-26 14:20:17.911441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.124 [2024-07-26 14:20:17.920770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.124 [2024-07-26 14:20:17.921141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.124 [2024-07-26 14:20:17.921168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.124 [2024-07-26 14:20:17.921189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.124 [2024-07-26 14:20:17.921424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.124 [2024-07-26 14:20:17.921674] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.124 [2024-07-26 14:20:17.921697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.124 [2024-07-26 14:20:17.921710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.124 [2024-07-26 14:20:17.924696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.124 [2024-07-26 14:20:17.934038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.124 [2024-07-26 14:20:17.934390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.124 [2024-07-26 14:20:17.934419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.124 [2024-07-26 14:20:17.934435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.124 [2024-07-26 14:20:17.934688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.124 [2024-07-26 14:20:17.934902] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.124 [2024-07-26 14:20:17.934922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.124 [2024-07-26 14:20:17.934934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.124 [2024-07-26 14:20:17.937927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.124 [2024-07-26 14:20:17.947371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.124 [2024-07-26 14:20:17.947816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.124 [2024-07-26 14:20:17.947844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.124 [2024-07-26 14:20:17.947860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.124 [2024-07-26 14:20:17.948102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.124 [2024-07-26 14:20:17.948312] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.124 [2024-07-26 14:20:17.948332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.124 [2024-07-26 14:20:17.948345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.124 [2024-07-26 14:20:17.951322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.124 [2024-07-26 14:20:17.960602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.124 [2024-07-26 14:20:17.960938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.124 [2024-07-26 14:20:17.960965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.124 [2024-07-26 14:20:17.960981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.124 [2024-07-26 14:20:17.961189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.124 [2024-07-26 14:20:17.961414] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.124 [2024-07-26 14:20:17.961438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.124 [2024-07-26 14:20:17.961451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.124 [2024-07-26 14:20:17.964415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.124 [2024-07-26 14:20:17.973907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.124 [2024-07-26 14:20:17.974364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.124 [2024-07-26 14:20:17.974419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.124 [2024-07-26 14:20:17.974434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.124 [2024-07-26 14:20:17.974688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.124 [2024-07-26 14:20:17.974914] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.124 [2024-07-26 14:20:17.974934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.124 [2024-07-26 14:20:17.974947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.124 [2024-07-26 14:20:17.977946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.124 [2024-07-26 14:20:17.987187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.124 [2024-07-26 14:20:17.987601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.124 [2024-07-26 14:20:17.987630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.124 [2024-07-26 14:20:17.987646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.124 [2024-07-26 14:20:17.987897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.124 [2024-07-26 14:20:17.988085] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.124 [2024-07-26 14:20:17.988105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.124 [2024-07-26 14:20:17.988118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.124 [2024-07-26 14:20:17.991079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.124 [2024-07-26 14:20:18.000437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.124 [2024-07-26 14:20:18.000926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.124 [2024-07-26 14:20:18.000981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.124 [2024-07-26 14:20:18.000997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.124 [2024-07-26 14:20:18.001240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.124 [2024-07-26 14:20:18.001429] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.124 [2024-07-26 14:20:18.001448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.124 [2024-07-26 14:20:18.001461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.124 [2024-07-26 14:20:18.004385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.124 [2024-07-26 14:20:18.013592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.124 [2024-07-26 14:20:18.013946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.124 [2024-07-26 14:20:18.013988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.124 [2024-07-26 14:20:18.014004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.124 [2024-07-26 14:20:18.014219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.124 [2024-07-26 14:20:18.014424] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.124 [2024-07-26 14:20:18.014443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.124 [2024-07-26 14:20:18.014456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.124 [2024-07-26 14:20:18.017651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.124 [2024-07-26 14:20:18.026973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.124 [2024-07-26 14:20:18.027322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.125 [2024-07-26 14:20:18.027350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.125 [2024-07-26 14:20:18.027366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.125 [2024-07-26 14:20:18.027617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.125 [2024-07-26 14:20:18.027832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.125 [2024-07-26 14:20:18.027852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.125 [2024-07-26 14:20:18.027879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.125 [2024-07-26 14:20:18.030804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.125 [2024-07-26 14:20:18.040091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.125 [2024-07-26 14:20:18.040439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.125 [2024-07-26 14:20:18.040468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.125 [2024-07-26 14:20:18.040484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.125 [2024-07-26 14:20:18.040764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.125 [2024-07-26 14:20:18.040971] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.125 [2024-07-26 14:20:18.040991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.125 [2024-07-26 14:20:18.041004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.125 [2024-07-26 14:20:18.043913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.125 [2024-07-26 14:20:18.053231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.125 [2024-07-26 14:20:18.053636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.125 [2024-07-26 14:20:18.053665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.125 [2024-07-26 14:20:18.053681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.125 [2024-07-26 14:20:18.053915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.125 [2024-07-26 14:20:18.054119] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.125 [2024-07-26 14:20:18.054139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.125 [2024-07-26 14:20:18.054152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.125 [2024-07-26 14:20:18.057070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.125 [2024-07-26 14:20:18.066429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.125 [2024-07-26 14:20:18.066849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.125 [2024-07-26 14:20:18.066877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.125 [2024-07-26 14:20:18.066892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.125 [2024-07-26 14:20:18.067108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.125 [2024-07-26 14:20:18.067311] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.125 [2024-07-26 14:20:18.067330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.125 [2024-07-26 14:20:18.067344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.125 [2024-07-26 14:20:18.070309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.125 [2024-07-26 14:20:18.079721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.125 [2024-07-26 14:20:18.080146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.125 [2024-07-26 14:20:18.080173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.125 [2024-07-26 14:20:18.080188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.125 [2024-07-26 14:20:18.080425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.125 [2024-07-26 14:20:18.080672] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.125 [2024-07-26 14:20:18.080694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.125 [2024-07-26 14:20:18.080707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.125 [2024-07-26 14:20:18.083599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.125 [2024-07-26 14:20:18.093009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.125 [2024-07-26 14:20:18.093392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.125 [2024-07-26 14:20:18.093469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.125 [2024-07-26 14:20:18.093484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.125 [2024-07-26 14:20:18.093737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.125 [2024-07-26 14:20:18.093945] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.125 [2024-07-26 14:20:18.093976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.125 [2024-07-26 14:20:18.093994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.125 [2024-07-26 14:20:18.096910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.125 [2024-07-26 14:20:18.106275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.125 [2024-07-26 14:20:18.106680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:10.125 [2024-07-26 14:20:18.106708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420
00:26:10.125 [2024-07-26 14:20:18.106724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set
00:26:10.125 [2024-07-26 14:20:18.106959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor
00:26:10.125 [2024-07-26 14:20:18.107167] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:10.125 [2024-07-26 14:20:18.107187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:10.125 [2024-07-26 14:20:18.107200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.125 [2024-07-26 14:20:18.110113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:10.125 [2024-07-26 14:20:18.119396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.125 [2024-07-26 14:20:18.119768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.125 [2024-07-26 14:20:18.119797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.125 [2024-07-26 14:20:18.119813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.125 [2024-07-26 14:20:18.120067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.125 [2024-07-26 14:20:18.120270] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.125 [2024-07-26 14:20:18.120290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.125 [2024-07-26 14:20:18.120303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.125 [2024-07-26 14:20:18.123242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.125 [2024-07-26 14:20:18.132649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.125 [2024-07-26 14:20:18.133013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.125 [2024-07-26 14:20:18.133041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.125 [2024-07-26 14:20:18.133058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.125 [2024-07-26 14:20:18.133305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.125 [2024-07-26 14:20:18.133519] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.125 [2024-07-26 14:20:18.133565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.125 [2024-07-26 14:20:18.133581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.125 [2024-07-26 14:20:18.137013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.385 [2024-07-26 14:20:18.146091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.385 [2024-07-26 14:20:18.146489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-07-26 14:20:18.146518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.385 [2024-07-26 14:20:18.146545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.385 [2024-07-26 14:20:18.146789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.385 [2024-07-26 14:20:18.146994] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.385 [2024-07-26 14:20:18.147013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.385 [2024-07-26 14:20:18.147026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.385 [2024-07-26 14:20:18.150035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.385 [2024-07-26 14:20:18.159284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.385 [2024-07-26 14:20:18.159684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-07-26 14:20:18.159713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.385 [2024-07-26 14:20:18.159730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.385 [2024-07-26 14:20:18.159980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.385 [2024-07-26 14:20:18.160168] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.385 [2024-07-26 14:20:18.160188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.385 [2024-07-26 14:20:18.160202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.385 [2024-07-26 14:20:18.163112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.385 [2024-07-26 14:20:18.172411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.385 [2024-07-26 14:20:18.172827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-07-26 14:20:18.172855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.385 [2024-07-26 14:20:18.172871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.385 [2024-07-26 14:20:18.173106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.385 [2024-07-26 14:20:18.173299] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.385 [2024-07-26 14:20:18.173319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.385 [2024-07-26 14:20:18.173332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.385 [2024-07-26 14:20:18.176305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.385 [2024-07-26 14:20:18.185646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.385 [2024-07-26 14:20:18.186033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-07-26 14:20:18.186098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.385 [2024-07-26 14:20:18.186113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.385 [2024-07-26 14:20:18.186346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.385 [2024-07-26 14:20:18.186583] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.385 [2024-07-26 14:20:18.186605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.385 [2024-07-26 14:20:18.186618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.385 [2024-07-26 14:20:18.189469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.385 [2024-07-26 14:20:18.198790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.385 [2024-07-26 14:20:18.199204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-07-26 14:20:18.199259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.385 [2024-07-26 14:20:18.199274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.385 [2024-07-26 14:20:18.199518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.385 [2024-07-26 14:20:18.199736] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.385 [2024-07-26 14:20:18.199756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.385 [2024-07-26 14:20:18.199769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.385 [2024-07-26 14:20:18.202643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.385 [2024-07-26 14:20:18.211990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.385 [2024-07-26 14:20:18.212360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-07-26 14:20:18.212387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.385 [2024-07-26 14:20:18.212402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.385 [2024-07-26 14:20:18.212663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.385 [2024-07-26 14:20:18.212892] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.385 [2024-07-26 14:20:18.212912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.385 [2024-07-26 14:20:18.212925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.385 [2024-07-26 14:20:18.215773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.385 [2024-07-26 14:20:18.225117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.385 [2024-07-26 14:20:18.225523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-07-26 14:20:18.225571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.385 [2024-07-26 14:20:18.225587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.385 [2024-07-26 14:20:18.225822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.385 [2024-07-26 14:20:18.226025] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.385 [2024-07-26 14:20:18.226045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.385 [2024-07-26 14:20:18.226058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.385 [2024-07-26 14:20:18.228977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.385 [2024-07-26 14:20:18.238206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.385 [2024-07-26 14:20:18.238615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.385 [2024-07-26 14:20:18.238646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.385 [2024-07-26 14:20:18.238662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.386 [2024-07-26 14:20:18.238900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.386 [2024-07-26 14:20:18.239103] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.386 [2024-07-26 14:20:18.239124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.386 [2024-07-26 14:20:18.239136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.386 [2024-07-26 14:20:18.242038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.386 [2024-07-26 14:20:18.251340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.386 [2024-07-26 14:20:18.251760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-07-26 14:20:18.251787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.386 [2024-07-26 14:20:18.251803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.386 [2024-07-26 14:20:18.252036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.386 [2024-07-26 14:20:18.252241] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.386 [2024-07-26 14:20:18.252262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.386 [2024-07-26 14:20:18.252274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.386 [2024-07-26 14:20:18.255173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.386 [2024-07-26 14:20:18.264460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.386 [2024-07-26 14:20:18.264875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-07-26 14:20:18.264903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.386 [2024-07-26 14:20:18.264918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.386 [2024-07-26 14:20:18.265156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.386 [2024-07-26 14:20:18.265358] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.386 [2024-07-26 14:20:18.265378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.386 [2024-07-26 14:20:18.265391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.386 [2024-07-26 14:20:18.268295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.386 [2024-07-26 14:20:18.277590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.386 [2024-07-26 14:20:18.277980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-07-26 14:20:18.278036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.386 [2024-07-26 14:20:18.278056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.386 [2024-07-26 14:20:18.278302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.386 [2024-07-26 14:20:18.278489] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.386 [2024-07-26 14:20:18.278509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.386 [2024-07-26 14:20:18.278523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.386 [2024-07-26 14:20:18.281401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.386 [2024-07-26 14:20:18.290697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.386 [2024-07-26 14:20:18.291102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-07-26 14:20:18.291130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.386 [2024-07-26 14:20:18.291145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.386 [2024-07-26 14:20:18.291380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.386 [2024-07-26 14:20:18.291612] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.386 [2024-07-26 14:20:18.291634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.386 [2024-07-26 14:20:18.291647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.386 [2024-07-26 14:20:18.294498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.386 [2024-07-26 14:20:18.303919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.386 [2024-07-26 14:20:18.304262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-07-26 14:20:18.304290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.386 [2024-07-26 14:20:18.304306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.386 [2024-07-26 14:20:18.304551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.386 [2024-07-26 14:20:18.304747] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.386 [2024-07-26 14:20:18.304767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.386 [2024-07-26 14:20:18.304780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.386 [2024-07-26 14:20:18.307699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.386 [2024-07-26 14:20:18.317088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.386 [2024-07-26 14:20:18.317432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-07-26 14:20:18.317460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.386 [2024-07-26 14:20:18.317476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.386 [2024-07-26 14:20:18.317741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.386 [2024-07-26 14:20:18.317977] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.386 [2024-07-26 14:20:18.318001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.386 [2024-07-26 14:20:18.318016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.386 [2024-07-26 14:20:18.320894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.386 [2024-07-26 14:20:18.330282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.386 [2024-07-26 14:20:18.330686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-07-26 14:20:18.330715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.386 [2024-07-26 14:20:18.330732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.386 [2024-07-26 14:20:18.330971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.386 [2024-07-26 14:20:18.331159] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.386 [2024-07-26 14:20:18.331178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.386 [2024-07-26 14:20:18.331191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.386 [2024-07-26 14:20:18.334094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.386 [2024-07-26 14:20:18.343594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.386 [2024-07-26 14:20:18.343999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-07-26 14:20:18.344056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.386 [2024-07-26 14:20:18.344072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.386 [2024-07-26 14:20:18.344315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.386 [2024-07-26 14:20:18.344501] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.386 [2024-07-26 14:20:18.344521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.386 [2024-07-26 14:20:18.344558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.386 [2024-07-26 14:20:18.347499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.386 [2024-07-26 14:20:18.356767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.386 [2024-07-26 14:20:18.357171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-07-26 14:20:18.357227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.386 [2024-07-26 14:20:18.357243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.386 [2024-07-26 14:20:18.357488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.386 [2024-07-26 14:20:18.357724] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.386 [2024-07-26 14:20:18.357745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.386 [2024-07-26 14:20:18.357758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.386 [2024-07-26 14:20:18.360651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.386 [2024-07-26 14:20:18.369957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.386 [2024-07-26 14:20:18.370364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.386 [2024-07-26 14:20:18.370394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.386 [2024-07-26 14:20:18.370410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.386 [2024-07-26 14:20:18.370656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.387 [2024-07-26 14:20:18.370865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.387 [2024-07-26 14:20:18.370884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.387 [2024-07-26 14:20:18.370897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.387 [2024-07-26 14:20:18.373778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.387 [2024-07-26 14:20:18.383124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.387 [2024-07-26 14:20:18.383503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-07-26 14:20:18.383540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.387 [2024-07-26 14:20:18.383559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.387 [2024-07-26 14:20:18.383788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.387 [2024-07-26 14:20:18.384034] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.387 [2024-07-26 14:20:18.384055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.387 [2024-07-26 14:20:18.384069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.387 [2024-07-26 14:20:18.387593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.387 [2024-07-26 14:20:18.396376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.387 [2024-07-26 14:20:18.396797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.387 [2024-07-26 14:20:18.396825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.387 [2024-07-26 14:20:18.396857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.387 [2024-07-26 14:20:18.397103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.387 [2024-07-26 14:20:18.397291] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.387 [2024-07-26 14:20:18.397310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.387 [2024-07-26 14:20:18.397322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.387 [2024-07-26 14:20:18.400726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.646 [2024-07-26 14:20:18.409798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.646 [2024-07-26 14:20:18.410105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.646 [2024-07-26 14:20:18.410147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.646 [2024-07-26 14:20:18.410162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.646 [2024-07-26 14:20:18.410362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.646 [2024-07-26 14:20:18.410612] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.646 [2024-07-26 14:20:18.410633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.646 [2024-07-26 14:20:18.410646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.646 [2024-07-26 14:20:18.413551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.647 [2024-07-26 14:20:18.422926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.647 [2024-07-26 14:20:18.423253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.647 [2024-07-26 14:20:18.423319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.647 [2024-07-26 14:20:18.423335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.647 [2024-07-26 14:20:18.423581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.647 [2024-07-26 14:20:18.423782] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.647 [2024-07-26 14:20:18.423803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.647 [2024-07-26 14:20:18.423816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.647 [2024-07-26 14:20:18.426677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.647 [2024-07-26 14:20:18.436016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.647 [2024-07-26 14:20:18.436409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.647 [2024-07-26 14:20:18.436466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.647 [2024-07-26 14:20:18.436481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.647 [2024-07-26 14:20:18.436739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.647 [2024-07-26 14:20:18.436963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.647 [2024-07-26 14:20:18.436983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.647 [2024-07-26 14:20:18.436995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.647 [2024-07-26 14:20:18.439870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.647 [2024-07-26 14:20:18.449210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.647 [2024-07-26 14:20:18.449553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.647 [2024-07-26 14:20:18.449597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.647 [2024-07-26 14:20:18.449614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.647 [2024-07-26 14:20:18.449856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.647 [2024-07-26 14:20:18.450059] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.647 [2024-07-26 14:20:18.450078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.647 [2024-07-26 14:20:18.450095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.647 [2024-07-26 14:20:18.453027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.647 [2024-07-26 14:20:18.462318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.647 [2024-07-26 14:20:18.462681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.647 [2024-07-26 14:20:18.462710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.647 [2024-07-26 14:20:18.462726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.647 [2024-07-26 14:20:18.462960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.647 [2024-07-26 14:20:18.463163] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.647 [2024-07-26 14:20:18.463183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.647 [2024-07-26 14:20:18.463196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.647 [2024-07-26 14:20:18.466113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.647 [2024-07-26 14:20:18.475556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.647 [2024-07-26 14:20:18.475965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.647 [2024-07-26 14:20:18.476021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.647 [2024-07-26 14:20:18.476037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.647 [2024-07-26 14:20:18.476280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.647 [2024-07-26 14:20:18.476468] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.647 [2024-07-26 14:20:18.476488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.647 [2024-07-26 14:20:18.476500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.647 [2024-07-26 14:20:18.479373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.647 [2024-07-26 14:20:18.488797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.647 [2024-07-26 14:20:18.489157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.647 [2024-07-26 14:20:18.489185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.647 [2024-07-26 14:20:18.489201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.647 [2024-07-26 14:20:18.489436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.647 [2024-07-26 14:20:18.489686] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.647 [2024-07-26 14:20:18.489707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.647 [2024-07-26 14:20:18.489721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.647 [2024-07-26 14:20:18.492642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.647 [2024-07-26 14:20:18.501841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.647 [2024-07-26 14:20:18.502202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.647 [2024-07-26 14:20:18.502244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.647 [2024-07-26 14:20:18.502260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.647 [2024-07-26 14:20:18.502478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.647 [2024-07-26 14:20:18.502712] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.647 [2024-07-26 14:20:18.502733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.647 [2024-07-26 14:20:18.502747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.647 [2024-07-26 14:20:18.505620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.647 [2024-07-26 14:20:18.514896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.647 [2024-07-26 14:20:18.515238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.647 [2024-07-26 14:20:18.515267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.647 [2024-07-26 14:20:18.515283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.647 [2024-07-26 14:20:18.515517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.647 [2024-07-26 14:20:18.515734] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.647 [2024-07-26 14:20:18.515755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.647 [2024-07-26 14:20:18.515768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.647 [2024-07-26 14:20:18.518642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.647 [2024-07-26 14:20:18.527876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.647 [2024-07-26 14:20:18.528325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.647 [2024-07-26 14:20:18.528353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.647 [2024-07-26 14:20:18.528369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.647 [2024-07-26 14:20:18.528632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.647 [2024-07-26 14:20:18.528826] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.647 [2024-07-26 14:20:18.528846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.647 [2024-07-26 14:20:18.528874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.647 [2024-07-26 14:20:18.531734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.647 [2024-07-26 14:20:18.541004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.647 [2024-07-26 14:20:18.541370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.647 [2024-07-26 14:20:18.541411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.647 [2024-07-26 14:20:18.541426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.647 [2024-07-26 14:20:18.541655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.647 [2024-07-26 14:20:18.541883] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.647 [2024-07-26 14:20:18.541904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.647 [2024-07-26 14:20:18.541917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.647 [2024-07-26 14:20:18.544777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.647 [2024-07-26 14:20:18.554061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.647 [2024-07-26 14:20:18.554408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.648 [2024-07-26 14:20:18.554437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.648 [2024-07-26 14:20:18.554454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.648 [2024-07-26 14:20:18.554822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.648 [2024-07-26 14:20:18.555061] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.648 [2024-07-26 14:20:18.555081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.648 [2024-07-26 14:20:18.555094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.648 [2024-07-26 14:20:18.558014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.648 [2024-07-26 14:20:18.567107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.648 [2024-07-26 14:20:18.567451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.648 [2024-07-26 14:20:18.567480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.648 [2024-07-26 14:20:18.567497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.648 [2024-07-26 14:20:18.567766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.648 [2024-07-26 14:20:18.568002] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.648 [2024-07-26 14:20:18.568023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.648 [2024-07-26 14:20:18.568035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.648 [2024-07-26 14:20:18.570934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.648 [2024-07-26 14:20:18.580232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.648 [2024-07-26 14:20:18.580637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.648 [2024-07-26 14:20:18.580664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.648 [2024-07-26 14:20:18.580680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.648 [2024-07-26 14:20:18.580911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.648 [2024-07-26 14:20:18.581116] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.648 [2024-07-26 14:20:18.581135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.648 [2024-07-26 14:20:18.581147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.648 [2024-07-26 14:20:18.584054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.648 [2024-07-26 14:20:18.593342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.648 [2024-07-26 14:20:18.593721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.648 [2024-07-26 14:20:18.593750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.648 [2024-07-26 14:20:18.593767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.648 [2024-07-26 14:20:18.594012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.648 [2024-07-26 14:20:18.594202] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.648 [2024-07-26 14:20:18.594222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.648 [2024-07-26 14:20:18.594236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.648 [2024-07-26 14:20:18.597126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 324663 Killed "${NVMF_APP[@]}" "$@" 00:26:10.648 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:10.648 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:10.648 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:10.648 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:10.648 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:10.648 [2024-07-26 14:20:18.606916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.648 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=325720 00:26:10.648 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:10.648 [2024-07-26 14:20:18.607339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.648 [2024-07-26 14:20:18.607370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.648 [2024-07-26 14:20:18.607387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.648 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 325720 00:26:10.648 [2024-07-26 14:20:18.607611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.648 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 325720 ']' 00:26:10.648 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.648 [2024-07-26 14:20:18.607845] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.648 [2024-07-26 
14:20:18.607867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.648 [2024-07-26 14:20:18.607896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.648 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:10.648 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.648 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:10.648 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:10.648 [2024-07-26 14:20:18.611036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.648 [2024-07-26 14:20:18.620288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.648 [2024-07-26 14:20:18.620642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.648 [2024-07-26 14:20:18.620671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.648 [2024-07-26 14:20:18.620688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.648 [2024-07-26 14:20:18.620917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.648 [2024-07-26 14:20:18.621132] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.648 [2024-07-26 14:20:18.621151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.648 [2024-07-26 14:20:18.621164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.648 [2024-07-26 14:20:18.624265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.648 [2024-07-26 14:20:18.633653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.648 [2024-07-26 14:20:18.634002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.648 [2024-07-26 14:20:18.634030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.648 [2024-07-26 14:20:18.634046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.648 [2024-07-26 14:20:18.634261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.648 [2024-07-26 14:20:18.634507] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.648 [2024-07-26 14:20:18.634539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.648 [2024-07-26 14:20:18.634555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
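Interleaved with the reconnect noise above, tgt_init restarts nvmf_tgt (nvmfpid=325720) and waitforlisten blocks until the new process answers on /var/tmp/spdk.sock. A rough C sketch of that wait follows, assuming a simple poll-until-connect strategy; the real autotest helper may behave differently, and the timeout here is an arbitrary assumption.

/*
 * Hedged sketch of "wait for a process to listen on a UNIX domain socket",
 * the behaviour the waitforlisten echo above reports. The socket path is
 * taken from the log; this is not the actual autotest helper.
 */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_unix_listener(const char *path, int max_tries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };

    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_tries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;               /* something is accepting on the socket */
        }
        close(fd);
        usleep(200 * 1000);         /* not up yet (ENOENT/ECONNREFUSED), retry */
    }
    return -1;                      /* never came up within the budget */
}

int main(void)
{
    return wait_for_unix_listener("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}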
00:26:10.648 [2024-07-26 14:20:18.638016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.648 [2024-07-26 14:20:18.647149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.648 [2024-07-26 14:20:18.647535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.648 [2024-07-26 14:20:18.647565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.648 [2024-07-26 14:20:18.647581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.648 [2024-07-26 14:20:18.647813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.648 [2024-07-26 14:20:18.648029] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.648 [2024-07-26 14:20:18.648048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.648 [2024-07-26 14:20:18.648061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.648 [2024-07-26 14:20:18.651182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.648 [2024-07-26 14:20:18.652411] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:26:10.648 [2024-07-26 14:20:18.652480] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:10.648 [2024-07-26 14:20:18.660648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.648 [2024-07-26 14:20:18.661020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.648 [2024-07-26 14:20:18.661048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.648 [2024-07-26 14:20:18.661063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.648 [2024-07-26 14:20:18.661277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.649 [2024-07-26 14:20:18.661563] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.649 [2024-07-26 14:20:18.661585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.649 [2024-07-26 14:20:18.661600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.914 [2024-07-26 14:20:18.664947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
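The restarted target is pinned with -m 0xE, surfaced to DPDK as -c 0xE in the EAL parameter line above. The mask selects CPU cores by bit position, so 0xE = 0b1110 means cores 1, 2 and 3, leaving core 0 free. A few lines of C, with the value hard-coded from the log, make the decoding explicit.

/*
 * Aside on the core mask seen in the EAL parameters: bit i set means core i
 * is used. 0xE is copied from the log; everything else is illustrative.
 */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xE;            /* value taken from the log */

    printf("core mask 0x%llX selects cores:", mask);
    for (int core = 0; core < 64; core++)
        if (mask & (1ULL << core))
            printf(" %d", core);
    printf("\n");                             /* -> cores 1 2 3 */
    return 0;
}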
00:26:10.914 [2024-07-26 14:20:18.673967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.914 [2024-07-26 14:20:18.674319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.914 [2024-07-26 14:20:18.674348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.914 [2024-07-26 14:20:18.674364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.914 [2024-07-26 14:20:18.674603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.914 [2024-07-26 14:20:18.674843] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.914 [2024-07-26 14:20:18.674864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.914 [2024-07-26 14:20:18.674879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.914 [2024-07-26 14:20:18.677877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.914 [2024-07-26 14:20:18.687266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.914 [2024-07-26 14:20:18.687624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.914 [2024-07-26 14:20:18.687652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.914 [2024-07-26 14:20:18.687669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.914 [2024-07-26 14:20:18.687910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.914 [2024-07-26 14:20:18.688109] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.914 [2024-07-26 14:20:18.688128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.914 [2024-07-26 14:20:18.688141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.914 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.914 [2024-07-26 14:20:18.691113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
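The `EAL: No free 2048 kB hugepages reported on node 1` line above is DPDK noting during init that NUMA node 1 has no 2 MiB hugepages reserved; startup continues because node 0 has them. A hedged sketch for inspecting the per-node split (standard sysfs paths, not taken from this log):

    # Show per-NUMA-node 2 MiB hugepage counts; node1 reporting 0 free
    # matches the EAL message above.
    for d in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
        echo "$d: total=$(cat "$d/nr_hugepages") free=$(cat "$d/free_hugepages")"
    done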
00:26:10.914 [2024-07-26 14:20:18.700606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.914 [2024-07-26 14:20:18.701048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.914 [2024-07-26 14:20:18.701080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.914 [2024-07-26 14:20:18.701097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.914 [2024-07-26 14:20:18.701342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.914 [2024-07-26 14:20:18.701585] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.914 [2024-07-26 14:20:18.701607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.914 [2024-07-26 14:20:18.701620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.914 [2024-07-26 14:20:18.704681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.914 [2024-07-26 14:20:18.713902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.914 [2024-07-26 14:20:18.714274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.914 [2024-07-26 14:20:18.714301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.914 [2024-07-26 14:20:18.714317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.914 [2024-07-26 14:20:18.714566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.914 [2024-07-26 14:20:18.714781] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.914 [2024-07-26 14:20:18.714800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.914 [2024-07-26 14:20:18.714813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.914 [2024-07-26 14:20:18.717717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.914 [2024-07-26 14:20:18.719555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:10.914 [2024-07-26 14:20:18.727253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.914 [2024-07-26 14:20:18.727715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.914 [2024-07-26 14:20:18.727751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.914 [2024-07-26 14:20:18.727770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.915 [2024-07-26 14:20:18.728020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.915 [2024-07-26 14:20:18.728223] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.915 [2024-07-26 14:20:18.728243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.915 [2024-07-26 14:20:18.728259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.915 [2024-07-26 14:20:18.731249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.915 [2024-07-26 14:20:18.740597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.915 [2024-07-26 14:20:18.741000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.915 [2024-07-26 14:20:18.741032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.915 [2024-07-26 14:20:18.741050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.915 [2024-07-26 14:20:18.741299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.915 [2024-07-26 14:20:18.741548] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.915 [2024-07-26 14:20:18.741569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.915 [2024-07-26 14:20:18.741598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.915 [2024-07-26 14:20:18.744608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.915 [2024-07-26 14:20:18.753993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.915 [2024-07-26 14:20:18.754396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.915 [2024-07-26 14:20:18.754438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.915 [2024-07-26 14:20:18.754455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.915 [2024-07-26 14:20:18.754679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.915 [2024-07-26 14:20:18.754924] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.915 [2024-07-26 14:20:18.754944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.915 [2024-07-26 14:20:18.754957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.915 [2024-07-26 14:20:18.757939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.915 [2024-07-26 14:20:18.767320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.915 [2024-07-26 14:20:18.767709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.915 [2024-07-26 14:20:18.767737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.915 [2024-07-26 14:20:18.767753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.915 [2024-07-26 14:20:18.767981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.915 [2024-07-26 14:20:18.768196] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.915 [2024-07-26 14:20:18.768215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.915 [2024-07-26 14:20:18.768228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.915 [2024-07-26 14:20:18.771093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.915 [2024-07-26 14:20:18.780708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.915 [2024-07-26 14:20:18.781201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.915 [2024-07-26 14:20:18.781250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.915 [2024-07-26 14:20:18.781286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.915 [2024-07-26 14:20:18.781544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.915 [2024-07-26 14:20:18.781756] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.915 [2024-07-26 14:20:18.781777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.915 [2024-07-26 14:20:18.781794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.915 [2024-07-26 14:20:18.784815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.915 [2024-07-26 14:20:18.794014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.915 [2024-07-26 14:20:18.794411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.915 [2024-07-26 14:20:18.794441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.915 [2024-07-26 14:20:18.794458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.915 [2024-07-26 14:20:18.794688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.915 [2024-07-26 14:20:18.794934] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.915 [2024-07-26 14:20:18.794953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.915 [2024-07-26 14:20:18.794968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.915 [2024-07-26 14:20:18.797951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.915 [2024-07-26 14:20:18.807252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.915 [2024-07-26 14:20:18.807692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.915 [2024-07-26 14:20:18.807721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.915 [2024-07-26 14:20:18.807738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.915 [2024-07-26 14:20:18.807984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.915 [2024-07-26 14:20:18.808197] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.915 [2024-07-26 14:20:18.808217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.915 [2024-07-26 14:20:18.808231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.915 [2024-07-26 14:20:18.811214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.915 [2024-07-26 14:20:18.820485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.915 [2024-07-26 14:20:18.820885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.915 [2024-07-26 14:20:18.820914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.915 [2024-07-26 14:20:18.820930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.915 [2024-07-26 14:20:18.821173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.915 [2024-07-26 14:20:18.821388] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.915 [2024-07-26 14:20:18.821408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.915 [2024-07-26 14:20:18.821421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.915 [2024-07-26 14:20:18.824357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.915 [2024-07-26 14:20:18.826419] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:10.915 [2024-07-26 14:20:18.826449] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:10.915 [2024-07-26 14:20:18.826476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:10.915 [2024-07-26 14:20:18.826494] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:10.915 [2024-07-26 14:20:18.826504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
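The four app_setup_trace notices above spell out how to harvest the tracepoints enabled by mask 0xFFFF; both commands below are taken verbatim from those notices rather than invented here:

    # Snapshot the enabled tracepoints while the target runs, or keep the
    # shared-memory trace file for offline analysis/debug.
    spdk_trace -s nvmf -i 0
    cp /dev/shm/nvmf_trace.0 /tmp/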
00:26:10.915 [2024-07-26 14:20:18.826747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:10.915 [2024-07-26 14:20:18.826774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:10.915 [2024-07-26 14:20:18.826777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.915 [2024-07-26 14:20:18.834091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.915 [2024-07-26 14:20:18.834534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.915 [2024-07-26 14:20:18.834570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.915 [2024-07-26 14:20:18.834588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.915 [2024-07-26 14:20:18.834812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.915 [2024-07-26 14:20:18.835044] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.915 [2024-07-26 14:20:18.835065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.915 [2024-07-26 14:20:18.835082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.915 [2024-07-26 14:20:18.838259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.915 [2024-07-26 14:20:18.847685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.915 [2024-07-26 14:20:18.848163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.915 [2024-07-26 14:20:18.848201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.916 [2024-07-26 14:20:18.848221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.916 [2024-07-26 14:20:18.848461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.916 [2024-07-26 14:20:18.848709] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.916 [2024-07-26 14:20:18.848733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.916 [2024-07-26 14:20:18.848751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.916 [2024-07-26 14:20:18.852013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.916 [2024-07-26 14:20:18.861264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.916 [2024-07-26 14:20:18.861777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.916 [2024-07-26 14:20:18.861818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.916 [2024-07-26 14:20:18.861838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.916 [2024-07-26 14:20:18.862077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.916 [2024-07-26 14:20:18.862294] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.916 [2024-07-26 14:20:18.862316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.916 [2024-07-26 14:20:18.862333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.916 [2024-07-26 14:20:18.865519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.916 [2024-07-26 14:20:18.874775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.916 [2024-07-26 14:20:18.875288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.916 [2024-07-26 14:20:18.875326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.916 [2024-07-26 14:20:18.875346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.916 [2024-07-26 14:20:18.875594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.916 [2024-07-26 14:20:18.875813] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.916 [2024-07-26 14:20:18.875834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.916 [2024-07-26 14:20:18.875852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.916 [2024-07-26 14:20:18.879059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.916 [2024-07-26 14:20:18.888286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.916 [2024-07-26 14:20:18.888772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.916 [2024-07-26 14:20:18.888807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.916 [2024-07-26 14:20:18.888826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.916 [2024-07-26 14:20:18.889053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.916 [2024-07-26 14:20:18.889277] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.916 [2024-07-26 14:20:18.889300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.916 [2024-07-26 14:20:18.889317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.916 [2024-07-26 14:20:18.892606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:10.916 [2024-07-26 14:20:18.901851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.916 [2024-07-26 14:20:18.902344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.916 [2024-07-26 14:20:18.902381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.916 [2024-07-26 14:20:18.902402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.916 [2024-07-26 14:20:18.902637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.916 [2024-07-26 14:20:18.902877] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.916 [2024-07-26 14:20:18.902899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.916 [2024-07-26 14:20:18.902918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.916 [2024-07-26 14:20:18.906085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:10.916 [2024-07-26 14:20:18.915301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:10.916 [2024-07-26 14:20:18.915664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.916 [2024-07-26 14:20:18.915693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:10.916 [2024-07-26 14:20:18.915718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:10.916 [2024-07-26 14:20:18.915949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:10.916 [2024-07-26 14:20:18.916161] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:10.916 [2024-07-26 14:20:18.916181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:10.916 [2024-07-26 14:20:18.916195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:10.916 [2024-07-26 14:20:18.919370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:11.175 [2024-07-26 14:20:18.928849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:11.175 [2024-07-26 14:20:18.929164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.175 [2024-07-26 14:20:18.929193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:11.175 [2024-07-26 14:20:18.929208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:11.175 [2024-07-26 14:20:18.929423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:11.175 [2024-07-26 14:20:18.929650] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:11.175 [2024-07-26 14:20:18.929672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:11.175 [2024-07-26 14:20:18.929686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:11.175 [2024-07-26 14:20:18.932866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:11.175 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:11.175 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:26:11.175 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:11.175 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:11.175 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:11.175 [2024-07-26 14:20:18.942392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:11.175 [2024-07-26 14:20:18.942758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.175 [2024-07-26 14:20:18.942787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:11.175 [2024-07-26 14:20:18.942803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:11.175 [2024-07-26 14:20:18.943017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:11.175 [2024-07-26 14:20:18.943244] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:11.175 [2024-07-26 14:20:18.943264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:11.175 [2024-07-26 14:20:18.943277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:11.175 [2024-07-26 14:20:18.946502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:11.175 [2024-07-26 14:20:18.956184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:11.175 [2024-07-26 14:20:18.956556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.175 [2024-07-26 14:20:18.956585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:11.175 [2024-07-26 14:20:18.956607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:11.175 [2024-07-26 14:20:18.956836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:11.175 [2024-07-26 14:20:18.957049] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:11.175 [2024-07-26 14:20:18.957070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:11.175 [2024-07-26 14:20:18.957083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:11.175 [2024-07-26 14:20:18.960341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
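The `(( i == 0 ))` / `return 0` trace at the top of this block is the tail of autotest's waitforlisten helper, which earlier announced "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." with max_retries=100 and returns 0 once the RPC socket answers. The shape of that wait, as a hedged sketch (the function name here is illustrative, not the exact autotest code):

    # Poll until the SPDK app's RPC socket answers a request, or give up.
    wait_for_rpc_sock() {
        local sock=${1:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }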
00:26:11.175 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:11.175 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:11.175 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.175 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:11.175 [2024-07-26 14:20:18.969808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:11.175 [2024-07-26 14:20:18.970143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.175 [2024-07-26 14:20:18.970171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:11.175 [2024-07-26 14:20:18.970187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:11.175 [2024-07-26 14:20:18.970401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:11.175 [2024-07-26 14:20:18.970658] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:11.175 [2024-07-26 14:20:18.970682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:11.175 [2024-07-26 14:20:18.970697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:11.175 [2024-07-26 14:20:18.971221] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.175 [2024-07-26 14:20:18.973993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:11.175 [2024-07-26 14:20:18.983325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:11.175 [2024-07-26 14:20:18.983655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.175 [2024-07-26 14:20:18.983683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:11.175 [2024-07-26 14:20:18.983699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:11.175 [2024-07-26 14:20:18.983928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:11.176 [2024-07-26 14:20:18.984134] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:11.176 [2024-07-26 14:20:18.984154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:11.176 [2024-07-26 14:20:18.984167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:11.176 [2024-07-26 14:20:18.987316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:11.176 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.176 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:11.176 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.176 14:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:11.176 [2024-07-26 14:20:18.996932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:11.176 [2024-07-26 14:20:18.997265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.176 [2024-07-26 14:20:18.997293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:11.176 [2024-07-26 14:20:18.997309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:11.176 [2024-07-26 14:20:18.997547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:11.176 [2024-07-26 14:20:18.997781] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:11.176 [2024-07-26 14:20:18.997802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:11.176 [2024-07-26 14:20:18.997815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:11.176 [2024-07-26 14:20:19.001249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:11.176 [2024-07-26 14:20:19.010470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:11.176 [2024-07-26 14:20:19.011010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.176 [2024-07-26 14:20:19.011050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:11.176 [2024-07-26 14:20:19.011070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:11.176 [2024-07-26 14:20:19.011309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:11.176 [2024-07-26 14:20:19.011552] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:11.176 [2024-07-26 14:20:19.011575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:11.176 [2024-07-26 14:20:19.011593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:11.176 Malloc0 00:26:11.176 [2024-07-26 14:20:19.014845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:11.176 14:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.176 14:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:11.176 14:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.176 14:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:11.176 14:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.176 14:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:11.176 14:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.176 14:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:11.176 [2024-07-26 14:20:19.024161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:11.176 [2024-07-26 14:20:19.024510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.176 [2024-07-26 14:20:19.024545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87ac0 with addr=10.0.0.2, port=4420 00:26:11.176 [2024-07-26 14:20:19.024563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ac0 is same with the state(5) to be set 00:26:11.176 [2024-07-26 14:20:19.024778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ac0 (9): Bad file descriptor 00:26:11.176 [2024-07-26 14:20:19.025016] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:11.176 [2024-07-26 14:20:19.025037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:11.176 [2024-07-26 14:20:19.025050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:11.176 [2024-07-26 14:20:19.028330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:11.176 14:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.176 14:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:11.176 14:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.176 14:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:11.176 [2024-07-26 14:20:19.034678] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.176 [2024-07-26 14:20:19.037837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:11.176 14:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.176 14:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 324939 00:26:11.176 [2024-07-26 14:20:19.112018] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
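Lines 17-21 of host/bdevperf.sh, visible in the trace above, bring the target up in the usual order: create the TCP transport, back a subsystem with a malloc bdev, then open the listener, which is exactly the moment the host's connect() retries start succeeding ("Resetting controller successful"). The same sequence issued by hand against the RPC socket would look roughly like this (the rpc.py path is assumed; every RPC name and argument is copied from the trace):

    # Mirror of the rpc_cmd calls in host/bdevperf.sh lines 17-21:
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420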
00:26:21.145
00:26:21.145                                                                  Latency(us)
00:26:21.145 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:21.145 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:21.145 	 Verification LBA range: start 0x0 length 0x4000
00:26:21.145 	 Nvme1n1             :      15.01    6725.57      26.27   10240.87       0.00    7521.25     867.75   17282.09
00:26:21.145 ===================================================================================================================
00:26:21.145 Total                       :            6725.57      26.27   10240.87       0.00    7521.25     867.75   17282.09
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:21.145 rmmod nvme_tcp
00:26:21.145 rmmod nvme_fabrics
00:26:21.145 rmmod nvme_keyring
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 325720 ']'
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 325720
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 325720 ']'
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 325720
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 325720
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 325720'
00:26:21.145 killing process with pid 325720
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 325720
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 325720
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:21.145 14:20:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:23.050
00:26:23.050 real	0m22.617s
00:26:23.050 user	1m0.212s
00:26:23.050 sys	0m4.504s
00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:23.050 ************************************
00:26:23.050 END TEST nvmf_bdevperf
00:26:23.050 ************************************
00:26:23.050 14:20:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:26:23.050 14:20:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:26:23.050 14:20:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:23.050 14:20:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:23.050 ************************************
00:26:23.050 START TEST nvmf_target_disconnect
00:26:23.050 ************************************
00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:26:23.050 * Looking for test storage...
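One aside before the target_disconnect output continues: the bdevperf summary printed above is easy to sanity-check, since with 4096-byte IOs the MiB/s column is just IOPS multiplied by the IO size, and 6725.57 x 4096 / 1048576 comes out to 26.27, matching the table. A quick check with bc:

    # Recompute the MiB/s column of the bdevperf summary from 4 KiB IOPS.
    echo "scale=2; 6725.57 * 4096 / 1048576" | bc   # -> 26.27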
00:26:23.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.050 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.051 
14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:23.051 14:20:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:24.954 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:24.954 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:24.954 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:24.954 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:24.954 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:24.954 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:24.954 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:24.954 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:24.954 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:24.954 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:24.954 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.955 
14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:24.955 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:24.955 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
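Note: the tables built above key pci_bus_cache by "vendor:device" (Intel 0x8086 with 0x1592/0x159b for E810 and 0x37d2 for X722; the 0x15b3 device IDs for Mellanox). A rough standalone equivalent of the E810 lookup, assuming lspci -Dn output instead of the script's prebuilt cache:

#!/usr/bin/env bash
# Scan the PCI bus for Intel E810 functions (device IDs 1592/159b),
# approximating the pci_bus_cache lookups in nvmf/common.sh.
e810=()
while read -r addr vendor device; do
    if [[ $vendor == 8086 && ( $device == 1592 || $device == 159b ) ]]; then
        e810+=("$addr")
    fi
done < <(lspci -Dn | awk '{split($3, id, ":"); print $1, id[1], id[2]}')
echo "E810 functions: ${e810[*]}"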
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:24.955 Found net devices under 0000:09:00.0: cvl_0_0 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:24.955 Found net devices under 0000:09:00.1: cvl_0_1 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
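Note: the loop above resolves each matched PCI function to its kernel interface by globbing /sys/bus/pci/devices/$pci/net/, which is how 0000:09:00.0 and 0000:09:00.1 become cvl_0_0 and cvl_0_1. The same walk in isolation (reading operstate stands in for the script's "up" check, which is an assumption here):

#!/usr/bin/env bash
# Map PCI functions to their netdev names via sysfs, as the
# nvmf/common.sh@383 glob does, and report link state.
for pci in 0000:09:00.0 0000:09:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue
        dev=${path##*/}
        state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
        echo "Found net device under $pci: $dev (${state:-unknown})"
    done
done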
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:24.955 14:20:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:25.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:25.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:26:25.214 00:26:25.214 --- 10.0.0.2 ping statistics --- 00:26:25.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.214 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:25.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:25.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:26:25.214 00:26:25.214 --- 10.0.0.1 ping statistics --- 00:26:25.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.214 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:25.214 ************************************ 00:26:25.214 START TEST nvmf_target_disconnect_tc1 00:26:25.214 ************************************ 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:25.214 14:20:33 
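Note: the namespace plumbing that nvmf_tcp_init ran above, collected into one sketch (interface names and the 10.0.0.0/24 addressing are taken from this run; all commands need root):

#!/usr/bin/env bash
# Re-create the split topology: the target side of the E810 pair lives in
# a network namespace, the initiator side stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator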
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:25.214 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:25.215 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.473 [2024-07-26 14:20:33.240658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.473 [2024-07-26 14:20:33.240721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9d1a0 with addr=10.0.0.2, port=4420 00:26:25.473 [2024-07-26 14:20:33.240758] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:25.473 [2024-07-26 14:20:33.240782] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:25.473 [2024-07-26 14:20:33.240797] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:25.473 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:25.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:25.473 Initializing NVMe Controllers 00:26:25.473 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:26:25.473 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:25.473 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:25.473 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:25.473 00:26:25.473 real 0m0.099s 00:26:25.473 user 0m0.040s 00:26:25.473 sys 0m0.058s 00:26:25.473 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:25.473 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:25.473 ************************************ 00:26:25.473 END TEST nvmf_target_disconnect_tc1 00:26:25.473 ************************************ 00:26:25.473 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:25.473 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:25.473 14:20:33 
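Note: tc1 passed because the probe was supposed to fail: errno 111 is ECONNREFUSED (no target is listening on 10.0.0.2:4420 at that point), and the NOT wrapper inverts the exit status. A simplified sketch of the wrapper plus the exact invocation under test (autotest_common.sh's real NOT also routes through valid_exec_arg and the es bookkeeping seen above):

#!/usr/bin/env bash
# Simplified expected-failure wrapper: succeed only if the command fails.
NOT() {
    if "$@"; then
        return 1    # unexpected success
    fi
    return 0        # expected failure
}
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
    -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'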
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:25.473 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:25.473 ************************************ 00:26:25.473 START TEST nvmf_target_disconnect_tc2 00:26:25.473 ************************************ 00:26:25.473 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:26:25.474 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:25.474 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:25.474 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:25.474 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:25.474 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.474 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=328878 00:26:25.474 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:25.474 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 328878 00:26:25.474 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 328878 ']' 00:26:25.474 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.474 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:25.474 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.474 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:25.474 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.474 [2024-07-26 14:20:33.352691] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:26:25.474 [2024-07-26 14:20:33.352772] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.474 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.474 [2024-07-26 14:20:33.415921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:25.732 [2024-07-26 14:20:33.530606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
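Note: before the RPCs below can run, nvmfappstart launched the target inside the namespace (pid 328878 in this run) and waitforlisten blocked on its RPC socket. A trimmed sketch, assuming SPDK's default socket path /var/tmp/spdk.sock:

#!/usr/bin/env bash
# Start nvmf_tgt in the target namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
for _ in $(seq 1 100); do
    [[ -S /var/tmp/spdk.sock ]] && break   # socket appears once the app is up
    sleep 0.1
done
kill -0 "$nvmfpid"    # confirm the target is still alive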
00:26:25.732 [2024-07-26 14:20:33.530670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.732 [2024-07-26 14:20:33.530684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.732 [2024-07-26 14:20:33.530696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.732 [2024-07-26 14:20:33.530706] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:25.732 [2024-07-26 14:20:33.530800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:25.732 [2024-07-26 14:20:33.530837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:25.732 [2024-07-26 14:20:33.530925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:25.732 [2024-07-26 14:20:33.530928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.732 Malloc0 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.732 [2024-07-26 14:20:33.713535] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
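Note: the rpc_cmd calls here and just below translate to plain scripts/rpc.py invocations; a standalone sketch of the whole target setup (MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 come from target_disconnect.sh; the default RPC socket is assumed):

#!/usr/bin/env bash
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# One 64 MiB / 512 B-block malloc bdev behind subsystem cnode1,
# listening for NVMe/TCP on 10.0.0.2:4420.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420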
00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.732 [2024-07-26 14:20:33.741794] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.732 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.991 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.991 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=328905 00:26:25.991 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:25.991 14:20:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:25.991 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.905 14:20:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 328878 00:26:27.905 14:20:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:27.905 Read completed with error (sct=0, sc=8) 00:26:27.905 starting I/O failed 00:26:27.905 Read completed with error (sct=0, sc=8) 00:26:27.905 starting I/O failed 00:26:27.905 Read completed with error (sct=0, sc=8) 00:26:27.905 starting I/O failed 00:26:27.905 Read completed with error (sct=0, sc=8) 00:26:27.905 starting I/O 
failed 00:26:27.905
(the trace then repeats the pair "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" for every outstanding I/O on each of the reconnect example's four qpairs; only the per-qpair transport-error summaries are kept below)
[2024-07-26 14:20:35.765818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[2024-07-26 14:20:35.766150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[2024-07-26 14:20:35.766463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[2024-07-26 14:20:35.766801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[2024-07-26 14:20:35.766962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-26 14:20:35.766996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
(the same connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it." triplet then repeats for each further reconnect attempt against 10.0.0.2:4420, cycling through tqpairs 0x7f4330000b90, 0x1030250, 0x7f4340000b90, and 0x7f4338000b90)
00:26:27.909 [2024-07-26 14:20:35.778749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.778791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.778904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.778932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.779051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.779077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.779181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.779208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.779321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.779348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.779459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.779487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.779614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.779642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.779755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.779782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.779861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.779888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.780008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.780035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 
00:26:27.909 [2024-07-26 14:20:35.780127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.780154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.780251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.780279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.780359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.780387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.780496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.780522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.780677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.780704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.780816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.780844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.781011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.781038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.781177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.781230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.781429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.781455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.781589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.781617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 
00:26:27.909 [2024-07-26 14:20:35.781707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.781734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.781851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.909 [2024-07-26 14:20:35.781878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.909 qpair failed and we were unable to recover it. 00:26:27.909 [2024-07-26 14:20:35.781985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.782012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.782123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.782149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.782262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.782289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.782407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.782434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.782579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.782606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.782714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.782740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.782826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.782858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.782976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.783002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 
00:26:27.910 [2024-07-26 14:20:35.783143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.783170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.783293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.783333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.783432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.783471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.783570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.783599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.783695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.783721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.783867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.783893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.783986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.784013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.784141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.784191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.784306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.784333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.784452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.784482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 
00:26:27.910 [2024-07-26 14:20:35.784634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.784661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.784776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.784801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.784897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.784922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.785037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.785063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.785178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.785203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.785296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.785321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.785422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.785461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.785547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.785575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.785700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.785727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.785866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.785892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 
00:26:27.910 [2024-07-26 14:20:35.786091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.910 [2024-07-26 14:20:35.786117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.910 qpair failed and we were unable to recover it. 00:26:27.910 [2024-07-26 14:20:35.786200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.786226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.786320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.786346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.786456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.786482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.786595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.786621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.786703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.786734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.786815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.786842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.786958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.786984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.787071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.787097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.787190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.787231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 
00:26:27.911 [2024-07-26 14:20:35.787322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.787350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.787464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.787490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.787611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.787639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.787723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.787750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.787842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.787869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.787959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.787987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.788102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.788128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.788238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.788266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.788345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.788371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.788484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.788510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 
00:26:27.911 [2024-07-26 14:20:35.788601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.788627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.788763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.788789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.788869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.788894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.789054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.789089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.789237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.789285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.789376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.789406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.789496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.789523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.789670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.789697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.789822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.789910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.790109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.790161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 
00:26:27.911 [2024-07-26 14:20:35.790298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.790323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.790451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.790477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.790566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.790598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.790687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.790713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.790794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.790820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.790957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.790982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.791094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.791121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.911 qpair failed and we were unable to recover it. 00:26:27.911 [2024-07-26 14:20:35.791232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.911 [2024-07-26 14:20:35.791258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.791400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.791426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.791548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.791575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 
00:26:27.912 [2024-07-26 14:20:35.791661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.791687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.791773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.791799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.791892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.791920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.792028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.792053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.792164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.792191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.792327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.792353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.792467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.792494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.792618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.792645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.792779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.792805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.792916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.792942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 
00:26:27.912 [2024-07-26 14:20:35.793034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.793061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.793139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.793164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.793245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.793272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.793386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.793412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.793533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.793560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.793646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.793672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.793756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.793783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.793887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.793913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.794018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.794044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.794143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.794170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 
00:26:27.912 [2024-07-26 14:20:35.794284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.794310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.794464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.794491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.794636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.794663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.794775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.794801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.794919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.794946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.795070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.795096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.795217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.795257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.795378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.795406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.795523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.795557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.795656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.795683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 
00:26:27.912 [2024-07-26 14:20:35.795810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.795837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.795981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.796007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.796117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.796149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.796288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.912 [2024-07-26 14:20:35.796314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.912 qpair failed and we were unable to recover it. 00:26:27.912 [2024-07-26 14:20:35.796439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.913 [2024-07-26 14:20:35.796478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.913 qpair failed and we were unable to recover it. 00:26:27.913 [2024-07-26 14:20:35.796578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.913 [2024-07-26 14:20:35.796607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.913 qpair failed and we were unable to recover it. 00:26:27.913 [2024-07-26 14:20:35.796697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.913 [2024-07-26 14:20:35.796723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.913 qpair failed and we were unable to recover it. 00:26:27.913 [2024-07-26 14:20:35.796872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.913 [2024-07-26 14:20:35.796918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.913 qpair failed and we were unable to recover it. 00:26:27.913 [2024-07-26 14:20:35.797088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.913 [2024-07-26 14:20:35.797115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.913 qpair failed and we were unable to recover it. 00:26:27.913 [2024-07-26 14:20:35.797291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.913 [2024-07-26 14:20:35.797358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.913 qpair failed and we were unable to recover it. 
00:26:27.913 [2024-07-26 14:20:35.797468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.913 [2024-07-26 14:20:35.797494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.913 qpair failed and we were unable to recover it.
00:26:27.913 [2024-07-26 14:20:35.798198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.913 [2024-07-26 14:20:35.798226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.913 qpair failed and we were unable to recover it.
00:26:27.913 [2024-07-26 14:20:35.798368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.913 [2024-07-26 14:20:35.798398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.913 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats continuously from 14:20:35.797590 through 14:20:35.826766, cycling over tqpair=0x1030250, tqpair=0x7f4340000b90, and tqpair=0x7f4338000b90, every attempt to 10.0.0.2 port 4420 failing with errno = 111 and ending in "qpair failed and we were unable to recover it." ...]
00:26:27.919 [2024-07-26 14:20:35.826879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.826905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.827038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.827065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.827155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.827181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.827403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.827429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.827537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.827564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.827697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.827723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.827842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.827868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.827953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.827979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.828070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.828097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.828185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.828210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 
00:26:27.919 [2024-07-26 14:20:35.828320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.828352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.828469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.828496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.828622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.828648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.828731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.828758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.828894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.828919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.829029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.829057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.829136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.829162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.829267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.829294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.829436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.829462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.829602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.829629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 
00:26:27.919 [2024-07-26 14:20:35.829722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.829749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.829825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.829851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.829960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.829986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.830099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.830126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.830215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.830241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.830389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.830428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.830547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.830575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.830664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.830691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.830770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.830796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.830881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.830908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 
00:26:27.919 [2024-07-26 14:20:35.830992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.831019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.831129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.831155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.919 [2024-07-26 14:20:35.831271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.919 [2024-07-26 14:20:35.831298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.919 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.831398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.831438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.831545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.831574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.831658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.831685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.831769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.831795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.831974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.832002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.832116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.832142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.832222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.832249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 
00:26:27.920 [2024-07-26 14:20:35.832337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.832363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.832438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.832465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.832550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.832577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.832661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.832687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.832802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.832828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.832922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.832947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.833062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.833088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.833179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.833205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.833323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.833363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.833487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.833534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 
00:26:27.920 [2024-07-26 14:20:35.833653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.833686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.833771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.833799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.833914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.833940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.834021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.834047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.834154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.834182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.834319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.834344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.834448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.834473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.834566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.834593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.834683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.834708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.834786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.834811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 
00:26:27.920 [2024-07-26 14:20:35.834919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.834946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.835082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.835107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.835195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.835222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.835351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.920 [2024-07-26 14:20:35.835378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.920 qpair failed and we were unable to recover it. 00:26:27.920 [2024-07-26 14:20:35.835489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.835543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.835643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.835670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.835788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.835816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.835954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.836019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.836126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.836193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.836329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.836356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 
00:26:27.921 [2024-07-26 14:20:35.836488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.836535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.836625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.836652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.836746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.836772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.836852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.836878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.836985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.837010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.837114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.837140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.837282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.837311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.837404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.837435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.837553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.837579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.837658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.837683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 
00:26:27.921 [2024-07-26 14:20:35.837781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.837806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.837921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.837951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.838068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.838095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.838172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.838198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.838310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.838336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.838442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.838470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.838585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.838611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.838723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.838750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.838838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.838863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.838976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.839003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 
00:26:27.921 [2024-07-26 14:20:35.839090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.839116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.839228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.839255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.839362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.839387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.839585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.839613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.839706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.839732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.839844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.839870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.839971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.839997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.840122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.840148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.840283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.840309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.840432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.840458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 
00:26:27.921 [2024-07-26 14:20:35.840594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.840623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.921 [2024-07-26 14:20:35.840736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.921 [2024-07-26 14:20:35.840762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.921 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.840847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.840872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.840982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.841008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.841145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.841186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.841308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.841336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.841424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.841450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.841602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.841629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.841715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.841742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.841837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.841862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 
00:26:27.922 [2024-07-26 14:20:35.841974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.842002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.842109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.842135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.842218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.842245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.842327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.842353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.842470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.842496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.842590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.842618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.842758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.842784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.842862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.842892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.843069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.843095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.843310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.843367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 
00:26:27.922 [2024-07-26 14:20:35.843475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.843501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.843595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.843623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.843766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.843792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.843936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.843962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.844076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.844102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.844188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.844216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.844361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.844386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.844482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.844509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.844622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.844647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.844756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.844782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 
00:26:27.922 [2024-07-26 14:20:35.844899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.844925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.845067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.845094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.845211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.845237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.845347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.845373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.845455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.845481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.845612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.845652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.845749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.845778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.845983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.846010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.846124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.922 [2024-07-26 14:20:35.846151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.922 qpair failed and we were unable to recover it. 00:26:27.922 [2024-07-26 14:20:35.846294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-07-26 14:20:35.846320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 
00:26:27.923 [2024-07-26 14:20:35.846409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-07-26 14:20:35.846435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-07-26 14:20:35.846543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-07-26 14:20:35.846569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-07-26 14:20:35.846681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-07-26 14:20:35.846706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-07-26 14:20:35.846785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-07-26 14:20:35.846811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-07-26 14:20:35.846938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-07-26 14:20:35.846969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-07-26 14:20:35.847104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-07-26 14:20:35.847130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-07-26 14:20:35.847212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-07-26 14:20:35.847238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-07-26 14:20:35.847349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-07-26 14:20:35.847375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-07-26 14:20:35.847453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-07-26 14:20:35.847479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 00:26:27.923 [2024-07-26 14:20:35.847616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.923 [2024-07-26 14:20:35.847656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.923 qpair failed and we were unable to recover it. 
00:26:27.923 [2024-07-26 14:20:35.847756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.847784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.847908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.847936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.848095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.848175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.848347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.848375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.848496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.848544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.848667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.848695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.848814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.848839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.848944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.848970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.849060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.849086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.849173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.849198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.849310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.849337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.849421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.849449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.849558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.849585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.849670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.849697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.849829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.849856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.849992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.850018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.850145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.850184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.850279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.850308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.850391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.850418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.850534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.850561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.850643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.850669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.850772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.850801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.850893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.850920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.851036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.851064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.851178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.851205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.851320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.851347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.923 qpair failed and we were unable to recover it.
00:26:27.923 [2024-07-26 14:20:35.851461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.923 [2024-07-26 14:20:35.851487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.851609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.851636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.851792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.851852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.852122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.852184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.852354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.852380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.852493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.852521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.852645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.852671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.852745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.852771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.852936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.852988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.853170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.853226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.853335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.853361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.853466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.853493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.853592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.853631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.853749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.853776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.853897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.853923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.854088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.854141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.854264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.854313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.854456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.854482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.854579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.854606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.854695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.854724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.854836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.854862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.854984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.855012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.855180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.855234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.855323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.855350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.855436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.855462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.855572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.855600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.855701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.855728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.855846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.855872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.855989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.856016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.856117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.856144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.856233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.856260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.856374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.856400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.856484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.856511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.856634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.856661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.856771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.856798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.856910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.856940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.857054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.857080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.857164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.924 [2024-07-26 14:20:35.857191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.924 qpair failed and we were unable to recover it.
00:26:27.924 [2024-07-26 14:20:35.857280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.857319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.857476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.857516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.857619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.857648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.857772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.857798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.857882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.857908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.858046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.858072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.858190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.858218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.858307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.858333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.858489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.858542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.858667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.858695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.858828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.858854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.858972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.858998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.859082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.859110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.859278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.859332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.859435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.859460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.859546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.859572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.859661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.859687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.859772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.859797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.859886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.859912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.860016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.860042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.860162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.860191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.860282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.860309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.860460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.860499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.860599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.860627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.860752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.860790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.860918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.860945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.861088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.861114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.861238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.861264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.861377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.861403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.861497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.861524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.861640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.861666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.925 [2024-07-26 14:20:35.861781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.925 [2024-07-26 14:20:35.861807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.925 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.861899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.861927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.862010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.862037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.862133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.862173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.862293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.862321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.862435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.862461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.862582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.862609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.862707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.862734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.862853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.862880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.862975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.863000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.863092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.863118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.863258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.863284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.863405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.863433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.863566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.863606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.863707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.863745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.863919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.863948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.864061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.864087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.864192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.864218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.864354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.864381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.864504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.864534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.864651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.864677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.864812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.864838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.864916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.864941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.865030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.865056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.865193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.865219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.865314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.865340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.865429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.865459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.865551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.865580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.865693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.865719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.865868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.865930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.866237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.866297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.866467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.866494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.866587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.866614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.866730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.866762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.866855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.866881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.866969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.866996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.867110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.867137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.926 [2024-07-26 14:20:35.867242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.926 [2024-07-26 14:20:35.867268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.926 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.867374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.867402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.867487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.867513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.867606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.867633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.867738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.867764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.867939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.867993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.868151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.868202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.868286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.868313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.868453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.868479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.868603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.868642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.868767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.868796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.869056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.869134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.869352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.869413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.869559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.869587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.869676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.869702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.869791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.869818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.869994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.870020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.870184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.870246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.870454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.870481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.870563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.870590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.870701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.870728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.870910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.870968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.871200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.871259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.871436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.871470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.871596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.871625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.871716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.871742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.871833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.871873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.872031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.872083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.872166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.872193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.872304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.872330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.872440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.872467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.872595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.872636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.872724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.872751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.872872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.872898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.872975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.873001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.873079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.873105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.873321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.873380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.927 [2024-07-26 14:20:35.873499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.927 [2024-07-26 14:20:35.873525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:27.927 qpair failed and we were unable to recover it.
00:26:27.933 [2024-07-26 14:20:35.903578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.903606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.903717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.903756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.903875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.903903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.904022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.904049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.904191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.904272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.904451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.904478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.904598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.904625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.904708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.904735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.904856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.904883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.904985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.905018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 
00:26:27.933 [2024-07-26 14:20:35.905198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.905270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.905386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.905412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.905520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.905556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.905636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.905663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.905739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.905764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.905951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.905977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.906050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.906077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.906187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.906213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.906324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.906350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.906452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.906478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 
00:26:27.933 [2024-07-26 14:20:35.906566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.906602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.906735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.906762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.906918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.906973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.907181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.907235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.907343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.907369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.907505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.907536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.907653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.907680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.907796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.907822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.907935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.907961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 00:26:27.933 [2024-07-26 14:20:35.908045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.933 [2024-07-26 14:20:35.908071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.933 qpair failed and we were unable to recover it. 
00:26:27.934 [2024-07-26 14:20:35.908177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-07-26 14:20:35.908203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-07-26 14:20:35.908313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-07-26 14:20:35.908339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-07-26 14:20:35.908443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-07-26 14:20:35.908469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-07-26 14:20:35.908579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-07-26 14:20:35.908605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-07-26 14:20:35.908686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-07-26 14:20:35.908713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-07-26 14:20:35.908803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-07-26 14:20:35.908829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-07-26 14:20:35.908919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-07-26 14:20:35.908949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-07-26 14:20:35.909039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-07-26 14:20:35.909065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-07-26 14:20:35.909167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-07-26 14:20:35.909192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-07-26 14:20:35.909279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-07-26 14:20:35.909305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 
00:26:27.934 [2024-07-26 14:20:35.909411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-07-26 14:20:35.909437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-07-26 14:20:35.909565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-07-26 14:20:35.909591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:27.934 [2024-07-26 14:20:35.909670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.934 [2024-07-26 14:20:35.909696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:27.934 qpair failed and we were unable to recover it. 00:26:28.213 [2024-07-26 14:20:35.909830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.213 [2024-07-26 14:20:35.909857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.213 qpair failed and we were unable to recover it. 00:26:28.213 [2024-07-26 14:20:35.909996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.213 [2024-07-26 14:20:35.910022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.213 qpair failed and we were unable to recover it. 00:26:28.213 [2024-07-26 14:20:35.910110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.213 [2024-07-26 14:20:35.910135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.213 qpair failed and we were unable to recover it. 00:26:28.213 [2024-07-26 14:20:35.910225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.213 [2024-07-26 14:20:35.910252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.213 qpair failed and we were unable to recover it. 00:26:28.213 [2024-07-26 14:20:35.910382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.213 [2024-07-26 14:20:35.910421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.213 qpair failed and we were unable to recover it. 00:26:28.213 [2024-07-26 14:20:35.910521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.213 [2024-07-26 14:20:35.910556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.213 qpair failed and we were unable to recover it. 00:26:28.213 [2024-07-26 14:20:35.910655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.213 [2024-07-26 14:20:35.910682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.213 qpair failed and we were unable to recover it. 
00:26:28.213 [2024-07-26 14:20:35.910914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.213 [2024-07-26 14:20:35.910969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.911181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.911233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.911313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.911338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.911454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.911480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.911591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.911617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.911707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.911732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.911850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.911877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.911966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.911991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.912106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.912131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.912242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.912268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 
00:26:28.214 [2024-07-26 14:20:35.912361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.912388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.912491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.912542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.912674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.912704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.912818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.912851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.912993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.913019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.913126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.913152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.913237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.913264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.913361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.913389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.913503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.913534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.913651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.913677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 
00:26:28.214 [2024-07-26 14:20:35.913760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.913786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.913899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.913925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.914061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.914087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.914199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.914224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.914333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.914358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.914472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.914498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.914594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.914623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.914746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.914773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.914882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.914908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.915021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.915047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 
00:26:28.214 [2024-07-26 14:20:35.915247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.915327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.915501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.915585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.915707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.915734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.915846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.915873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.915981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.916008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.916088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.916115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.916319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.916379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.214 [2024-07-26 14:20:35.916588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.214 [2024-07-26 14:20:35.916615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.214 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.916753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.916779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.916891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.916917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 
00:26:28.215 [2024-07-26 14:20:35.917036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.917063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.917182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.917210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.917374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.917435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.917662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.917690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.917833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.917862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.917938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.917964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.918096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.918122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.918236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.918262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.918348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.918373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.918491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.918516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 
00:26:28.215 [2024-07-26 14:20:35.918634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.918660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.918781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.918807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.918920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.918945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.919059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.919091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.919170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.919196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.919293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.919319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.919459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.919484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.919604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.919631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.919722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.919747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.919855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.919881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 
00:26:28.215 [2024-07-26 14:20:35.920020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.920045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.920164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.920190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.920280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.920309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.920449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.920476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.920594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.920622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.920733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.920759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.920874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.920900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.920987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.921014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.921131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.921158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.921369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.921429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 
00:26:28.215 [2024-07-26 14:20:35.921648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.921676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.921773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.921800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.921911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.921938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.922018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.922045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.215 qpair failed and we were unable to recover it. 00:26:28.215 [2024-07-26 14:20:35.922159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.215 [2024-07-26 14:20:35.922186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.216 qpair failed and we were unable to recover it. 00:26:28.216 [2024-07-26 14:20:35.922413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.216 [2024-07-26 14:20:35.922479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.216 qpair failed and we were unable to recover it. 00:26:28.216 [2024-07-26 14:20:35.922610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.216 [2024-07-26 14:20:35.922639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.216 qpair failed and we were unable to recover it. 00:26:28.216 [2024-07-26 14:20:35.922733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.216 [2024-07-26 14:20:35.922759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.216 qpair failed and we were unable to recover it. 00:26:28.216 [2024-07-26 14:20:35.922903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.216 [2024-07-26 14:20:35.922930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.216 qpair failed and we were unable to recover it. 00:26:28.216 [2024-07-26 14:20:35.923057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.216 [2024-07-26 14:20:35.923110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.216 qpair failed and we were unable to recover it. 
00:26:28.216 [2024-07-26 14:20:35.923227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.923253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.923340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.923367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.923479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.923506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.923623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.923650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.923761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.923788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.923903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.923930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.924051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.924079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.924316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.924375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.924603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.924631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.924743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.924770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.924883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.924911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.925018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.925045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.925174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.925235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.925441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.925472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.925611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.925639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.925749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.925776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.925883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.925910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.926049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.926076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.926248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.926307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.926508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.926540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.926624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.926653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.926757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.926796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.926979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.927031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.927180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.927236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.927322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.927350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.927460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.927486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.927572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.927600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.927742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.927768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.927890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.927917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.928032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.928058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.928195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.928220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.928330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.216 [2024-07-26 14:20:35.928355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.216 qpair failed and we were unable to recover it.
00:26:28.216 [2024-07-26 14:20:35.928473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.928498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.928618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.928645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.928759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.928785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.928913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.928940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.929074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.929100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.929179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.929205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.929303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.929329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.929425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.929465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.929614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.929643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.929785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.929811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.929948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.929974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.930165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.930252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.930418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.930464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.930560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.930588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.930704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.930731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.930848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.930874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.931012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.931038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.931246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.931323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.931598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.931625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.931763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.931790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.931874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.931901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.932004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.932035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.932123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.932152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.932379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.932418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.932514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.932549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.932665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.932691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.932806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.932833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.933021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.933078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.933166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.933192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.933353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.933404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.933493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.933520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.933639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.933665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.933750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.933777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.933892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.933918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.934145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.934231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.934447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.934474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.934611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.934638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.217 qpair failed and we were unable to recover it.
00:26:28.217 [2024-07-26 14:20:35.934751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.217 [2024-07-26 14:20:35.934777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.934890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.934917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.935031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.935058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.935173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.935199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.935345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.935372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.935461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.935488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.935607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.935635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.935746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.935774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.935891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.935918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.936013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.936040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.936271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.936332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.936454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.936481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.936600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.936627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.936766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.936792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.936901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.936927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.937005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.937031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.937170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.937196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.937279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.937305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.937416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.937443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.937583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.937609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.937701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.937727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.937844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.937870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.937987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.938015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.938123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.938150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.938237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.938263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.938348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.938375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.938459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.938486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.938603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.938630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.938740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.938766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.938953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.939014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.218 [2024-07-26 14:20:35.939275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.218 [2024-07-26 14:20:35.939335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.218 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.939519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.939551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.939645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.939673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.939796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.939822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.939934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.939960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.940236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.940296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.940468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.940494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.940623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.940650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.940772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.940799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.940886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.940913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.941097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.941151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.941357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.941410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.941522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.941556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.941649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.941676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.941787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.941812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.941945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.942010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.942254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.942335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.942568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.942611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.942719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.942745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.942885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.942911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.943125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.943208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.943511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.943547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.943659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.943686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.943769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.943830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.944036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.944094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.944328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.944390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.944591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.944619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.944758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.944784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.944924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.944951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.945105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.945164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.945379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.945419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.945512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.945546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.945631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.945657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.945797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.945823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.945994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.946043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.946189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.946244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.946322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.946347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.946435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.219 [2024-07-26 14:20:35.946461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.219 qpair failed and we were unable to recover it.
00:26:28.219 [2024-07-26 14:20:35.946562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.946591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.946706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.946733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.946847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.946874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.947015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.947042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.947155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.947181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.947292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.947319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.947433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.947460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.947580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.947608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.947692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.947719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.947800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.947826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.947976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.948006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.948145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.948172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.948394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.948454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.948613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.948640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.948788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.948814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.948954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.948981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.949156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.949218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.949382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.949457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.949668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.949696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.949784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.949811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.949902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.949929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.950037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.950063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.950174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.950223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.950397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.950448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.950566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.950594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.950686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.950713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.950819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.950845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.950934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.950961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.951168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.951227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.951457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.951516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.951694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.951721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.951823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.951849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.951969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.951996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.952107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.952134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.952246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.952274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.952409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.952436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.952553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.952580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.952722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.220 [2024-07-26 14:20:35.952749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.220 qpair failed and we were unable to recover it.
00:26:28.220 [2024-07-26 14:20:35.952830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.952857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.952950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.952976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.953145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.953205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.953379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.953439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.953638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.953665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.953805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.953831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.953972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.953998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.954109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.954136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.954230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.954257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.954342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.954368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.954457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.954484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.954571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.954599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.954707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.954737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.954831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.954857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.954994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.955023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.955167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.955220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.955425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.955480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.955597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.955623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.955716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.955742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.955856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.955882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.955996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.956022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.956112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.956138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.956222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.956249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.956368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.956394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.956506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.956543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.956623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.956650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.956756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.956783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.956945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.957040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.957275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.957334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.957604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.957631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.957770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.957796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.957905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.957931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.958047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.958074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.958212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.958238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.958413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.221 [2024-07-26 14:20:35.958469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.221 qpair failed and we were unable to recover it.
00:26:28.221 [2024-07-26 14:20:35.958555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-07-26 14:20:35.958582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-07-26 14:20:35.958673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-07-26 14:20:35.958699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.221 [2024-07-26 14:20:35.958875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.221 [2024-07-26 14:20:35.958928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.221 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.959076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.959138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.959349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.959396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.959473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.959499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.959660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.959713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.959886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.959936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.960118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.960164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.960302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.960328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 
00:26:28.222 [2024-07-26 14:20:35.960435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.960461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.960662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.960728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.960967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.961044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.961260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.961336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.961511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.961587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.961776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.961857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.962107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.962185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.962408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.962469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.962685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.962712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.962904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.962964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 
00:26:28.222 [2024-07-26 14:20:35.963194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.963254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.963472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.963546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.963668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.963695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.963882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.963942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.964202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.964261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.964471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.964542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.964696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.964723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.964858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.964886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.964999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.965025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.965160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.965187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 
00:26:28.222 [2024-07-26 14:20:35.965301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.965328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.965475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.965502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.965625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.965653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.965739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.965766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.965847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.965873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.965958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.965987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.966124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.966184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.966424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.966484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.966687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.966714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 00:26:28.222 [2024-07-26 14:20:35.966827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.222 [2024-07-26 14:20:35.966855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.222 qpair failed and we were unable to recover it. 
00:26:28.223 [2024-07-26 14:20:35.966973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.967000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.967091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.967118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.967229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.967257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.967445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.967505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.967707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.967738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.967848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.967874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.967991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.968018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.968237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.968298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.968525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.968596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.968684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.968711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 
00:26:28.223 [2024-07-26 14:20:35.968801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.968828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.968940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.968967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.969110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.969137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.969243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.969270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.969360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.969386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.969469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.969496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.969608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.969635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.969753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.969779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.969923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.969950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.970134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.970194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 
00:26:28.223 [2024-07-26 14:20:35.970379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.970438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.970647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.970675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.970764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.970792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.970871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.970898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.971006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.971032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.971123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.971192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.971361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.971418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.971567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.971595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.971710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.971736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.971853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.971880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 
00:26:28.223 [2024-07-26 14:20:35.971983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.972030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.223 [2024-07-26 14:20:35.972263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.223 [2024-07-26 14:20:35.972320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.223 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.972524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.972591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.972709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.972736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.972823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.972850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.972936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.972964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.973068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.973095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.973180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.973207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.973289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.973317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.973423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.973449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 
00:26:28.224 [2024-07-26 14:20:35.973536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.973564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.973655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.973682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.973795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.973821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.973958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.973985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.974073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.974105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.974189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.974215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.974301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.974328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.974459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.974514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.974712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.974739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.974846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.974873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 
00:26:28.224 [2024-07-26 14:20:35.974964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.974990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.975072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.975098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.975203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.975230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.975324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.975350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.975483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.975510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.975660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.975716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.975922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.975991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.976243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.976294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.976467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.976520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.976716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.976768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 
00:26:28.224 [2024-07-26 14:20:35.976970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.977022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.977205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.977232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.977369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.977395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.977540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.977592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.977792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.977844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.978076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.978148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.978320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.978372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.978567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.978621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.224 [2024-07-26 14:20:35.978825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.224 [2024-07-26 14:20:35.978898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.224 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.979131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.979185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 
00:26:28.225 [2024-07-26 14:20:35.979387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.979439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.979706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.979779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.980059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.980130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.980335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.980388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.980608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.980685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.980970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.981039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.981295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.981347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.981552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.981604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.981853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.981922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.982158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.982185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 
00:26:28.225 [2024-07-26 14:20:35.982330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.982357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.982573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.982628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.982861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.982932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.983136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.983205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.983436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.983495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.983752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.983822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.984053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.984125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.984362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.984414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.984651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.984723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 00:26:28.225 [2024-07-26 14:20:35.984998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.225 [2024-07-26 14:20:35.985069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.225 qpair failed and we were unable to recover it. 
00:26:28.225 [2024-07-26 14:20:35.985311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.985363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.985632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.985702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.985870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.985922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.986124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.986150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.986271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.986297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.986378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.986405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.986545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.986572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.986693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.986770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.987014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.987067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.987309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.987361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.987608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.987682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.987878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.987948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.988135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.988187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.988330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.988381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.988589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.988643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.988820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.225 [2024-07-26 14:20:35.988874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.225 qpair failed and we were unable to recover it.
00:26:28.225 [2024-07-26 14:20:35.989088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.989141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.989343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.989394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.989630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.989683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.989867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.989919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.990080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.990135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.990371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.990447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.990712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.990782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.991066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.991130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.991416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.991479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.991730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.991783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.992036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.992100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.992350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.992413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.992646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.992698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.992912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.992977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.993255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.993319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.993586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.993640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.993797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.993850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.994047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.994111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.994397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.994461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.994807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.994922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.995223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.995291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.995604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.995659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.995908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.995977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.996256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.996322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.996560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.996634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.996869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.996922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.997199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.997263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.997505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.997600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.997778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.997849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.998084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.998139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.998433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.998496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.998775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.998828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.999105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.999171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.999433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.999497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:35.999782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:35.999856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:36.000147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:36.000211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:36.000459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:36.000524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:36.000745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.226 [2024-07-26 14:20:36.000799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.226 qpair failed and we were unable to recover it.
00:26:28.226 [2024-07-26 14:20:36.001042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.001107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.001347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.001413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.001675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.001730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.001935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.001987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.002195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.002260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.002506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.002600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.002852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.002917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.003197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.003275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.003586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.003639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.003875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.003927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.004192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.004266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.004464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.004517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.004741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.004793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.005089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.005115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.005221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.005247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.005365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.005391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.005578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.005632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.005847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.005873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.005989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.006016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.006191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.006243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.006513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.006604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.006762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.006812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.006925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.006951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.007060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.007086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.007166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.007222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.007387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.007456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.007717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.007785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.008050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.008102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.008332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.008397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.008693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.008746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.009028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.009092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.009332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.227 [2024-07-26 14:20:36.009399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.227 qpair failed and we were unable to recover it.
00:26:28.227 [2024-07-26 14:20:36.009642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.009711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.009991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.010055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.010354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.010419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.010661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.010725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.011002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.011067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.011352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.011416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.011663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.011729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.011975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.012041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.012322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.012387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.012666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.012732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.013022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.013087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.013367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.013432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.013696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.013765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.014010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.014075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.014311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.014375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.014611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.014689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.014935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.015002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.015296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.015361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.015643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.015676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.015782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.015813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.015957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.015988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.016176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.016251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.016542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.016611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.016888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.016952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.017199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.017264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.017515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.017602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.017777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.017843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.018134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.018179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.018354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.018400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.018669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.018736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.019003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.019063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.019295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.019354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.019556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.019617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.019847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.019910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.020113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.020175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.020386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.020446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.228 [2024-07-26 14:20:36.020674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.228 [2024-07-26 14:20:36.020755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.228 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.021016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.021080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.021299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.021366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.021666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.021734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.021983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.022048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.022257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.022324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.022598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.022665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.022918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.022984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.023264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.023329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.023576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.023642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.023919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.023984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.024225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.024290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.024538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.024566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.024678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.024705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.024796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.024823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.024937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.024964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.025152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.025216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.025422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.025490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.025733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.025798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.026051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.026124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.026370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.026444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.026718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.026785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.026982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.027047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.027315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.027379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.027591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.027659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.027909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.027974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.028269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.028334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.028625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.028691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.028972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.029036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.029252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.029318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.029580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.029646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.029880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.029946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.030189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.030256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.030495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.030596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.030844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.030908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.031119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.031185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.031418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.229 [2024-07-26 14:20:36.031483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.229 qpair failed and we were unable to recover it.
00:26:28.229 [2024-07-26 14:20:36.031735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.031800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.032045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.032112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.032398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.032464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.032697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.032763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.032973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.033040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.033326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.033391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.033602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.033670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.033923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.033988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.034192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.034257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.034484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.034563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.034785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.034851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.035139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.035202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.035463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.035543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.035832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.035898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.036194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.036258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.036555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.036620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.036899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.036963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.037240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.037305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.037558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.037624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.037842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.037906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.038187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.038213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.038325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.038350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.038435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.038465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.038609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.038678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.038897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.038963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.039250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.039315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.039596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.039662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.039885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.039949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.040164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.040230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.230 [2024-07-26 14:20:36.040493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.230 [2024-07-26 14:20:36.040570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.230 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.040861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.040925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.041141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.041207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.041487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.041567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.041785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.041853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.042087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.042138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.042393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.042458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.042723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.042789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.043036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.043101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.043382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.043448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.043714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.043781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.044025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.044092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.044318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.044383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.044620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.044688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.044905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.044971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.045263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.045327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.045597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.045664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.045936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.046001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.046246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.046310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.046563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.046628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.046860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.046926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.047186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.231 [2024-07-26 14:20:36.047250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.231 qpair failed and we were unable to recover it.
00:26:28.231 [2024-07-26 14:20:36.047493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-07-26 14:20:36.047573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-07-26 14:20:36.047827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-07-26 14:20:36.047892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-07-26 14:20:36.048147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-07-26 14:20:36.048211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-07-26 14:20:36.048492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-07-26 14:20:36.048575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-07-26 14:20:36.048861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-07-26 14:20:36.048925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-07-26 14:20:36.049183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-07-26 14:20:36.049247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-07-26 14:20:36.049478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-07-26 14:20:36.049560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-07-26 14:20:36.049779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-07-26 14:20:36.049843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-07-26 14:20:36.050061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-07-26 14:20:36.050126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-07-26 14:20:36.050364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-07-26 14:20:36.050429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 
00:26:28.231 [2024-07-26 14:20:36.050748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-07-26 14:20:36.050814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-07-26 14:20:36.051112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-07-26 14:20:36.051191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-07-26 14:20:36.051443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-07-26 14:20:36.051508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-07-26 14:20:36.051784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-07-26 14:20:36.051853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-07-26 14:20:36.052102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.231 [2024-07-26 14:20:36.052166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.231 qpair failed and we were unable to recover it. 00:26:28.231 [2024-07-26 14:20:36.052410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.052477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.052748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.052818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.053117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.053183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.053396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.053461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.053704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.053772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 
00:26:28.232 [2024-07-26 14:20:36.053966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.054031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.054344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.054409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.054665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.054731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.054973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.055038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.055291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.055356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.055662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.055729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.055994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.056059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.056292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.056357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.056619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.056685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.056883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.056950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 
00:26:28.232 [2024-07-26 14:20:36.057198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.057263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.057517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.057595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.057800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.057867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.058076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.058142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.058429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.058495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.058801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.058867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.059116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.059181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.059424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.059489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.059754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.232 [2024-07-26 14:20:36.059820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.232 qpair failed and we were unable to recover it. 00:26:28.232 [2024-07-26 14:20:36.060031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.060100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 
00:26:28.233 [2024-07-26 14:20:36.060388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.060452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.060715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.060784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.061031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.061096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.061385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.061450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.061682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.061751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.061997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.062061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.062271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.062338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.062595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.062663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.062893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.062958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.063201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.063269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 
00:26:28.233 [2024-07-26 14:20:36.063456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.063524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.063772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.063846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.064094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.064159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.064444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.064510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.064820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.064885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.065064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.065144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.065369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.065434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.233 [2024-07-26 14:20:36.065687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.233 [2024-07-26 14:20:36.065753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.233 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.066005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.066070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.066270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.066334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 
00:26:28.234 [2024-07-26 14:20:36.066590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.066657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.066915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.066981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.067224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.067291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.067504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.067583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.067843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.067909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.068134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.068199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.068411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.068477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.068776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.068841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.069085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.069151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.069398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.069464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 
00:26:28.234 [2024-07-26 14:20:36.069696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.069763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.070023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.070088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.070342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.070407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.070660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.070727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.070935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.070999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.071212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.071276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.071540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.071605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.071886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.071951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.072209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.072274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.072512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.072573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 
00:26:28.234 [2024-07-26 14:20:36.072837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.072902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.073145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.073210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.073487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.073566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.073780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.073846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.074076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.074143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.074421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.074486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.074775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.074840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.075042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.075109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.075384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.075449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.075706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.075773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 
00:26:28.234 [2024-07-26 14:20:36.076017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.076082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.234 qpair failed and we were unable to recover it. 00:26:28.234 [2024-07-26 14:20:36.076296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.234 [2024-07-26 14:20:36.076371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.076632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.076699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.076974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.077038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.077324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.077388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.077620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.077686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.077946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.078010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.078253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.078316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.078561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.078627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.078829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.078899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 
00:26:28.235 [2024-07-26 14:20:36.079139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.079204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.079483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.079563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.079772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.079836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.080022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.080090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.080374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.080439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.080719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.080785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.081028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.081093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.081320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.081389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.081639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.081707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.081950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.082015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 
00:26:28.235 [2024-07-26 14:20:36.082262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.082327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.082640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.082707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.082976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.083040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.083320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.083385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.083584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.083651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.083857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.083923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.084160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.084225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.084451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.084517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.084788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.084853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.085101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.085166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 
00:26:28.235 [2024-07-26 14:20:36.085421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.085485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.085788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.085854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.086091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.086156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.086402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.235 [2024-07-26 14:20:36.086466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.235 qpair failed and we were unable to recover it. 00:26:28.235 [2024-07-26 14:20:36.086729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-07-26 14:20:36.086794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-07-26 14:20:36.087045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-07-26 14:20:36.087110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-07-26 14:20:36.087358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-07-26 14:20:36.087421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-07-26 14:20:36.087715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-07-26 14:20:36.087782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-07-26 14:20:36.088042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-07-26 14:20:36.088108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 00:26:28.236 [2024-07-26 14:20:36.088331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.236 [2024-07-26 14:20:36.088396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.236 qpair failed and we were unable to recover it. 
00:26:28.236 [2024-07-26 14:20:36.088645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.236 [2024-07-26 14:20:36.088712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.236 qpair failed and we were unable to recover it.
00:26:28.236 [... the same errno = 111 (connection refused) connect() failure from posix_sock_create and the matching nvme_tcp_qpair_connect_sock error for tqpair=0x7f4338000b90 (addr=10.0.0.2, port=4420) repeat continuously, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:26:28.242 [2024-07-26 14:20:36.150389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.242 [2024-07-26 14:20:36.150455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.242 qpair failed and we were unable to recover it.
00:26:28.242 [2024-07-26 14:20:36.150712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.150780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.151005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.151072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.151297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.151362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.151598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.151664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.151953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.152018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.152274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.152339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.152581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.152648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.152888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.152954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.153200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.153265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.153509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.153589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 
00:26:28.242 [2024-07-26 14:20:36.153841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.153906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.154107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.154171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.154372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.154436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.154730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.154796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.155055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.155123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.155332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.155399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.155634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.155700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.155993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.156057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.156312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.156389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.156670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.156736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 
00:26:28.242 [2024-07-26 14:20:36.156967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.157033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.157232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.157298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.157555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.157621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.157828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.157893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.158167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.158232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.158504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.158604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.158889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.158954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.159246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.159311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.159563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.159631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.242 qpair failed and we were unable to recover it. 00:26:28.242 [2024-07-26 14:20:36.159876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.242 [2024-07-26 14:20:36.159940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 
00:26:28.243 [2024-07-26 14:20:36.160169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.160234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.160432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.160497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.160731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.160798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.161020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.161084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.161309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.161372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.161654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.161719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.161957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.162023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.162240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.162303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.162517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.162595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.162862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.162927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 
00:26:28.243 [2024-07-26 14:20:36.163200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.163265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.163545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.163611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.163847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.163912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.164158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.164224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.164445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.164510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.164762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.164827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.165025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.165090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.165320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.165385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.165627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.165693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.165971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.166036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 
00:26:28.243 [2024-07-26 14:20:36.166236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.166305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.166584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.166650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.166890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.166955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.167225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.167290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.167495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.167582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.167836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.167901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.168106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.168171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.168416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.168481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.168703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.168781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.169026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.169092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 
00:26:28.243 [2024-07-26 14:20:36.169372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.243 [2024-07-26 14:20:36.169437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.243 qpair failed and we were unable to recover it. 00:26:28.243 [2024-07-26 14:20:36.169707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.169774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.169967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.170031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.170247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.170313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.170560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.170626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.170838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.170903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.171148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.171212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.171475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.171553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.171816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.171881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.172153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.172218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 
00:26:28.244 [2024-07-26 14:20:36.172468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.172547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.172781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.172845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.173145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.173210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.173438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.173502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.173779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.173846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.174057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.174122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.174328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.174395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.174643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.174711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.174937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.175003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.175243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.175308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 
00:26:28.244 [2024-07-26 14:20:36.175520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.175601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.175839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.175904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.176103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.176168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.176423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.176488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.176756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.176821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.177084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.177149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.177429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.177493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.177759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.177824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.178109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.178174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.178365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.178429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 
00:26:28.244 [2024-07-26 14:20:36.178691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.178757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.179000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.179067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.179353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.179418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.179715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.179782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.180036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.244 [2024-07-26 14:20:36.180100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.244 qpair failed and we were unable to recover it. 00:26:28.244 [2024-07-26 14:20:36.180308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.180373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.180631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.180698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.180987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.181052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.181294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.181368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.181623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.181689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 
00:26:28.245 [2024-07-26 14:20:36.181874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.181939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.182162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.182230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.182477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.182561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.182796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.182862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.183119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.183185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.183424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.183488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.183721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.183786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.183987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.184052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.184299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.184365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.184584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.184652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 
00:26:28.245 [2024-07-26 14:20:36.184868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.184935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.185136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.185204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.185461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.185543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.185749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.185815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.186059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.186127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.186382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.186447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.186723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.186789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.187051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.187116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.187390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.187455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.187673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.187742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 
00:26:28.245 [2024-07-26 14:20:36.187985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.188049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.188289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.188353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.188597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.188665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.188884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.188951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.189134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.189198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.189456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.189522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.189776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.189842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.245 qpair failed and we were unable to recover it. 00:26:28.245 [2024-07-26 14:20:36.190085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.245 [2024-07-26 14:20:36.190153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-07-26 14:20:36.190365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-07-26 14:20:36.190430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-07-26 14:20:36.190633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-07-26 14:20:36.190700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 
00:26:28.246 [2024-07-26 14:20:36.190953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-07-26 14:20:36.191018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-07-26 14:20:36.191219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-07-26 14:20:36.191284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-07-26 14:20:36.191566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-07-26 14:20:36.191633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-07-26 14:20:36.191879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-07-26 14:20:36.191944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-07-26 14:20:36.192185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-07-26 14:20:36.192250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-07-26 14:20:36.192525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-07-26 14:20:36.192607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-07-26 14:20:36.192873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-07-26 14:20:36.192938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-07-26 14:20:36.193190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-07-26 14:20:36.193257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-07-26 14:20:36.193485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-07-26 14:20:36.193577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 00:26:28.246 [2024-07-26 14:20:36.193786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.246 [2024-07-26 14:20:36.193853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.246 qpair failed and we were unable to recover it. 
[... the identical three-line failure sequence (posix.c:1023:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim for each subsequent reconnect attempt, timestamps 2024-07-26 14:20:36.194107 through 14:20:36.258052 ...]
00:26:28.531 [2024-07-26 14:20:36.258313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.531 [2024-07-26 14:20:36.258377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:28.531 qpair failed and we were unable to recover it.
00:26:28.531 [2024-07-26 14:20:36.258621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-07-26 14:20:36.258689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.531 qpair failed and we were unable to recover it. 00:26:28.531 [2024-07-26 14:20:36.258985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-07-26 14:20:36.259050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.531 qpair failed and we were unable to recover it. 00:26:28.531 [2024-07-26 14:20:36.259338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-07-26 14:20:36.259402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.531 qpair failed and we were unable to recover it. 00:26:28.531 [2024-07-26 14:20:36.259621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-07-26 14:20:36.259688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.531 qpair failed and we were unable to recover it. 00:26:28.531 [2024-07-26 14:20:36.259914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-07-26 14:20:36.259981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.531 qpair failed and we were unable to recover it. 00:26:28.531 [2024-07-26 14:20:36.260247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-07-26 14:20:36.260312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.531 qpair failed and we were unable to recover it. 00:26:28.531 [2024-07-26 14:20:36.260486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.531 [2024-07-26 14:20:36.260562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.531 qpair failed and we were unable to recover it. 00:26:28.531 [2024-07-26 14:20:36.260847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.260912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.261107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.261172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.261413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.261478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 
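(note for triage: errno = 111 on Linux is ECONNREFUSED, meaning the TCP peer at 10.0.0.2:4420 actively rejected the handshake, typically because nothing is listening on that port yet, so every connect() issued by SPDK's posix sock layer fails immediately. The following is a minimal standalone sketch, independent of SPDK; it uses 127.0.0.1 with port 4420 assumed closed so the refusal is reliably local, and reproduces the same errno the posix_sock_create() path reports:)

    /* econnrefused_demo.c - minimal sketch, not part of the SPDK test suite:
     * connect() to a TCP port with no listener and print the errno, which
     * on Linux is 111 (ECONNREFUSED), matching the records above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* The log's target was 10.0.0.2:4420; 127.0.0.1 is used here so the
         * demo does not depend on that host being reachable. */
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* Expected when no listener is bound to the port: errno = 111 */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

(built with `cc econnrefused_demo.c`, this should print "connect() failed, errno = 111 (Connection refused)" as long as nothing local listens on 4420)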
00:26:28.532 [2024-07-26 14:20:36.261761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103e230 is same with the state(5) to be set 00:26:28.532 [2024-07-26 14:20:36.262149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.262247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.262434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.262515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.262805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.262871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.263072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.263138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.263387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.263452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.263697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.263764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.264050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.264115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.264393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.264458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.264722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.264788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 
00:26:28.532 [2024-07-26 14:20:36.265040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.265105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.265348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.265414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.265701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.265768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.265995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.266060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.266288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.266354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.266605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.266673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.266952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.267017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.267266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.267331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.267618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.267685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.267937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.268006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 
00:26:28.532 [2024-07-26 14:20:36.268226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.268292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.268574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.268642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.268850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.268915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.269126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.269192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.532 [2024-07-26 14:20:36.269437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.532 [2024-07-26 14:20:36.269501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.532 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.269767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.269833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.270121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.270186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.270431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.270495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.270760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.270826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.271125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.271191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 
00:26:28.533 [2024-07-26 14:20:36.271439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.271504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.271820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.271886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.272112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.272177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.272374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.272439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.272668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.272734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.273021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.273085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.273331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.273401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.273660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.273728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.273946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.274013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.274299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.274365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 
00:26:28.533 [2024-07-26 14:20:36.274641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.274708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.274932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.274997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.275214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.275291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.275578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.275645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.275891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.275957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.276155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.276221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.276467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.276545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.276763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.276828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.277072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.277137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.277412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.277476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 
00:26:28.533 [2024-07-26 14:20:36.277777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.277843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.278093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.278160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.278366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.278434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.278659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.278728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.278983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.279048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.279255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.279320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.279588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.279657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.279911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.279975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.280263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.280328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.280569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.280637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 
00:26:28.533 [2024-07-26 14:20:36.280919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.533 [2024-07-26 14:20:36.280984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.533 qpair failed and we were unable to recover it. 00:26:28.533 [2024-07-26 14:20:36.281253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.281319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.281568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.281633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.281872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.281937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.282184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.282251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.282465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.282551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.282805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.282873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.283172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.283238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.283457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.283522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.283834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.283900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 
00:26:28.534 [2024-07-26 14:20:36.284146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.284211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.284471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.284556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.284811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.284878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.285134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.285200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.285485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.285566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.285818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.285885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.286177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.286242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.286495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.286578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.286782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.286847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.287050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.287117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 
00:26:28.534 [2024-07-26 14:20:36.287372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.287436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.287718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.287786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.288011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.288087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.288339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.288404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.288659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.288726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.288953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.289018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.289303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.289367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.289584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.289651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.289901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.289966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.290187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.290251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 
00:26:28.534 [2024-07-26 14:20:36.290521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.290609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.290856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.290925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.291182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.291248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.291493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.291593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.291848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.291913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.292110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.292176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.292484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.292567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.292792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.534 [2024-07-26 14:20:36.292856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.534 qpair failed and we were unable to recover it. 00:26:28.534 [2024-07-26 14:20:36.293107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.293173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.293411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.293475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 
00:26:28.535 [2024-07-26 14:20:36.293745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.293810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.294076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.294140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.294347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.294415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.294648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.294714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.294945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.295010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.295299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.295363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.295618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.295659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.295826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.295865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.295989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.296028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.296162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.296201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 
00:26:28.535 [2024-07-26 14:20:36.296340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.296379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.296550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.296591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.296711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.296750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.296902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.296940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.297056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.297095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.297258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.297297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.297427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.297468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.297663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.297704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.297820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.297859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.298011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.298050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 
00:26:28.535 [2024-07-26 14:20:36.298210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.298249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.298381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.298421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.298584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.298630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.298805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.298839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.298975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.299029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.299154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.299221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.299413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.299447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.299609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.299645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.299755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.299791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.299918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.299953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 
00:26:28.535 [2024-07-26 14:20:36.300095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.300148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.300275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.300315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.300438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.300480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.300633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.300671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.300837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.300876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.535 qpair failed and we were unable to recover it. 00:26:28.535 [2024-07-26 14:20:36.301145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.535 [2024-07-26 14:20:36.301210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.536 qpair failed and we were unable to recover it. 00:26:28.536 [2024-07-26 14:20:36.301415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.536 [2024-07-26 14:20:36.301482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.536 qpair failed and we were unable to recover it. 00:26:28.536 [2024-07-26 14:20:36.301683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.536 [2024-07-26 14:20:36.301718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.536 qpair failed and we were unable to recover it. 00:26:28.536 [2024-07-26 14:20:36.301862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.536 [2024-07-26 14:20:36.301896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.536 qpair failed and we were unable to recover it. 00:26:28.536 [2024-07-26 14:20:36.302027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.536 [2024-07-26 14:20:36.302062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.536 qpair failed and we were unable to recover it. 
00:26:28.536 [2024-07-26 14:20:36.302196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-07-26 14:20:36.302233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-07-26 14:20:36.302449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-07-26 14:20:36.302507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-07-26 14:20:36.302620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-07-26 14:20:36.302655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-07-26 14:20:36.302789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-07-26 14:20:36.302866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-07-26 14:20:36.303091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-07-26 14:20:36.303156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-07-26 14:20:36.303378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-07-26 14:20:36.303442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-07-26 14:20:36.303590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-07-26 14:20:36.303623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-07-26 14:20:36.303737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-07-26 14:20:36.303770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-07-26 14:20:36.303879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-07-26 14:20:36.303915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.536 [2024-07-26 14:20:36.304142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.536 [2024-07-26 14:20:36.304240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.536 qpair failed and we were unable to recover it.
00:26:28.541 [2024-07-26 14:20:36.341028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.341092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.341376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.341411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.341569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.341623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.341852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.341916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.342164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.342227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.342498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.342539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.342683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.342719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.342927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.342991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.343219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.343283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.343473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.343554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 
00:26:28.541 [2024-07-26 14:20:36.343842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.343906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.344119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.344182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.344386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.344453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.344703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.344770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.345014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.345079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.345338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.345403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.345653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.345720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.346009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.346072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.346323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.346358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.346470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.346507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 
00:26:28.541 [2024-07-26 14:20:36.346644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.346679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.346802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.346838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.347028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.347093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.347324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.347389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.347587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.347652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.347868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.347933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.348156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.348220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.348434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.348500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.348767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.348832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.349077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.349118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 
00:26:28.541 [2024-07-26 14:20:36.349241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.349277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.349490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.349581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.349842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.541 [2024-07-26 14:20:36.349905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-07-26 14:20:36.350162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.350202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.350338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.350377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.350641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.350678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.350830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.350865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.351107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.351170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.351381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.351446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.351681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.351745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-07-26 14:20:36.351940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.352015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.352299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.352363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.352619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.352686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.352948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.353012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.353226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.353291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.353578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.353642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.353872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.353936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.354233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.354272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.354400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.354438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.354670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.354735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-07-26 14:20:36.355019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.355083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.355327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.355392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.355660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.355724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.355920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.355984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.356200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.356262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.356481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.356561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.356799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.356863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.357086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.357150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.357392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.357457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.357687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.357746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-07-26 14:20:36.357912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.357947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.358206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.358270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.358480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.358564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-07-26 14:20:36.358833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.542 [2024-07-26 14:20:36.358897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.359143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.359183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.359308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.359346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.359560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.359625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.359883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.359918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.360062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.360114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.360329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.360406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 
00:26:28.543 [2024-07-26 14:20:36.360619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.360684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.360892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.360956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.361192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.361257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.361497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.361574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.361703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.361737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.361833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.361866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.361981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.362013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.362123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.362154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.362272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.362304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.362448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.362481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 
00:26:28.543 [2024-07-26 14:20:36.362600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.362632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.362741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.362774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.362878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.362909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.363045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.363076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.363218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.363251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.363363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.363395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.363549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.363581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.363684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.363714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.363851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.363918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.364110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.364158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 
00:26:28.543 [2024-07-26 14:20:36.364350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.364398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.364566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.364614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.364751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.364781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.364945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.364995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.365190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.365241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.365431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.365482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.365666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.365696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.365800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.365861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.366069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.366117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-07-26 14:20:36.366268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.366316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 
00:26:28.543 [2024-07-26 14:20:36.366524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.543 [2024-07-26 14:20:36.366560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.366654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.366685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.366785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.366853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.367055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.367106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.367276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.367327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.367539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.367607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.367704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.367733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.367826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.367857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.368002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.368037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.368195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.368229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 
00:26:28.544 [2024-07-26 14:20:36.368371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.368405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.368542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.368593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.368702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.368731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.368904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.368934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.369026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.369055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.369221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.369250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.369374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.369408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.369521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.369581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.369688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.369718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.369836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.369895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 
00:26:28.544 [2024-07-26 14:20:36.370071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.370121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.370295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.370346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.370509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.370600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.370711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.370742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.370866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.370900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.371049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.371084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.371194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.371229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.371345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.371380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.371495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.371540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.371671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.371702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 
00:26:28.544 [2024-07-26 14:20:36.371826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.371860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.371987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.372021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.372161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.372196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.372299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.372335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.372473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.372507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.372666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.372699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.372840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.372875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.373010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.373045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.544 [2024-07-26 14:20:36.373180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.544 [2024-07-26 14:20:36.373214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.544 qpair failed and we were unable to recover it. 00:26:28.545 [2024-07-26 14:20:36.373369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-07-26 14:20:36.373399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 
00:26:28.545 [2024-07-26 14:20:36.373518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-07-26 14:20:36.373562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-07-26 14:20:36.373678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-07-26 14:20:36.373708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-07-26 14:20:36.373800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-07-26 14:20:36.373830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-07-26 14:20:36.373926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-07-26 14:20:36.373956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-07-26 14:20:36.374080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-07-26 14:20:36.374114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-07-26 14:20:36.374217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-07-26 14:20:36.374252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-07-26 14:20:36.374408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-07-26 14:20:36.374438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-07-26 14:20:36.374597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-07-26 14:20:36.374629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-07-26 14:20:36.374722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-07-26 14:20:36.374753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 00:26:28.545 [2024-07-26 14:20:36.374869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.545 [2024-07-26 14:20:36.374908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.545 qpair failed and we were unable to recover it. 
00:26:28.550 [2024-07-26 14:20:36.411372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-07-26 14:20:36.411417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-07-26 14:20:36.411588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-07-26 14:20:36.411634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-07-26 14:20:36.411815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-07-26 14:20:36.411861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-07-26 14:20:36.412044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-07-26 14:20:36.412089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-07-26 14:20:36.412252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-07-26 14:20:36.412296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-07-26 14:20:36.412442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-07-26 14:20:36.412487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-07-26 14:20:36.412680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-07-26 14:20:36.412726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-07-26 14:20:36.412908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.550 [2024-07-26 14:20:36.412952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.550 qpair failed and we were unable to recover it. 00:26:28.550 [2024-07-26 14:20:36.413133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.413177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.413348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.413395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 
00:26:28.551 [2024-07-26 14:20:36.413551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.413602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.413755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.413802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.414026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.414073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.414317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.414364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.414593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.414639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.414845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.414890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.415101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.415145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.415297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.415342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.415545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.415594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.415773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.415820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 
00:26:28.551 [2024-07-26 14:20:36.416010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.416057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.416236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.416283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.416463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.416510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.416698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.416746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.416928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.416975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.417197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.417244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.417468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.417515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.417687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.417734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.417918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.417965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.418148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.418197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 
00:26:28.551 [2024-07-26 14:20:36.418357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.418404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.418609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.418658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.418852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.418900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.419084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.419131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.419310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.419358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.419484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.419550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.419721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.419768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.419985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.420032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.420238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.420284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 00:26:28.551 [2024-07-26 14:20:36.420471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.420539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.551 qpair failed and we were unable to recover it. 
00:26:28.551 [2024-07-26 14:20:36.420762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.551 [2024-07-26 14:20:36.420810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.420951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.420999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.421220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.421267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.421469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.421516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.421690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.421737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.421901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.421948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.422092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.422140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.422329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.422378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.422581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.422630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.422806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.422852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 
00:26:28.552 [2024-07-26 14:20:36.423043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.423090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.423282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.423329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.423486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.423543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.423740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.423787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.424005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.424052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.424231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.424278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.424445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.424492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.424694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.424741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.424929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.424976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.425120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.425168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 
00:26:28.552 [2024-07-26 14:20:36.425366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.425413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.425618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.425666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.425854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.425901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.426119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.426166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.426339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.426386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.426562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.426611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.426849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.426897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.427126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.427173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.427319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.427366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.427570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.427619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 
00:26:28.552 [2024-07-26 14:20:36.427787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.427834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.428017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.428065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.428283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.428331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.428489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.428544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.428706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.428754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.428914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.428961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.552 qpair failed and we were unable to recover it. 00:26:28.552 [2024-07-26 14:20:36.429149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.552 [2024-07-26 14:20:36.429196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.429389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.429436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.429623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.429670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.429860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.429915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 
00:26:28.553 [2024-07-26 14:20:36.430103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.430151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.430307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.430354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.430507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.430569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.430767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.430815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.430996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.431043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.431237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.431285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.431505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.431582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.431807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.431855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.432036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.432084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.432308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.432355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 
00:26:28.553 [2024-07-26 14:20:36.432573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.432622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.432771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.432818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.432969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.433016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.433181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.433228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.433389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.433436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.433638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.433687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.433884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.433932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.434146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.434194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.434422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.434469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.434656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.434715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 
00:26:28.553 [2024-07-26 14:20:36.434897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.434946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.435133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.435182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.435343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.435392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.435565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.435614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.435791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.435857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.436072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.436119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.436347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.436394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.436580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.436629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.436816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.436865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.437053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.437101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 
00:26:28.553 [2024-07-26 14:20:36.437268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.437316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.437474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.437524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.437748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.437812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.438040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.553 [2024-07-26 14:20:36.438089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.553 qpair failed and we were unable to recover it. 00:26:28.553 [2024-07-26 14:20:36.438295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.438342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.438500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.438557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.438780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.438828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.439011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.439059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.439249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.439296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.439460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.439508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 
00:26:28.554 [2024-07-26 14:20:36.439729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.439777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.439966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.440013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.440199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.440249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.440447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.440494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.440648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.440697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.440877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.440942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.441134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.441198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.441400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.441447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.441653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.441721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.444732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.444807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 
00:26:28.554 [2024-07-26 14:20:36.445091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.445159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.445334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.445404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.445574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.445622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.445830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.445902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.446100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.446165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.446328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.446377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.446569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.446620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.446805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.446872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.447102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.447150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 00:26:28.554 [2024-07-26 14:20:36.447338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.554 [2024-07-26 14:20:36.447385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.554 qpair failed and we were unable to recover it. 
00:26:28.560 [2024-07-26 14:20:36.497850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.497898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.498038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.498086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.498234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.498281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.498466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.498513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.498674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.498726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.498949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.498996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.499211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.499265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.499412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.499459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.499669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.499717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.499949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.500016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 
00:26:28.560 [2024-07-26 14:20:36.500181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.500228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.500454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.500501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.500699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.500768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.501029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.501095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.501315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.501363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.501566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.501625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.501811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.501880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.502093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.502157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.502390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.502437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.502646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.502715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 
00:26:28.560 [2024-07-26 14:20:36.502950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.503016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.503239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.503287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.503518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.503574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.503807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.503875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.560 [2024-07-26 14:20:36.504091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.560 [2024-07-26 14:20:36.504157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.560 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.504340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.504388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.504547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.504600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.504852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.504917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.505153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.505219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.505408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.505457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 
00:26:28.561 [2024-07-26 14:20:36.505706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.505773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.505964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.506034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.506249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.506314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.506590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.506663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.506889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.506955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.507213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.507282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.507473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.507521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.507712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.507784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.507962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.508030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.508254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.508321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 
00:26:28.561 [2024-07-26 14:20:36.508506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.508579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.508802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.508870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.509028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.509075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.509203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.509250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.509410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.509458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.509670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.509719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.509907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.509988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.510201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.510267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.510449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.510496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.510769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.510836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 
00:26:28.561 [2024-07-26 14:20:36.511090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.511156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.511381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.511429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.511645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.511711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.511926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.511994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.512189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.512254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.512475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.512522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.512787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.512857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.513070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.513137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.513312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.513358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.513545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.513600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 
00:26:28.561 [2024-07-26 14:20:36.513817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.513880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.561 [2024-07-26 14:20:36.514084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.561 [2024-07-26 14:20:36.514148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.561 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.514341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.514389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.514589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.514636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.514852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.514899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.515096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.515162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.515317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.515365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.515543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.515591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.515811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.515857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.516119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.516166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 
00:26:28.562 [2024-07-26 14:20:36.516355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.516402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.516623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.516692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.516841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.516890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.517092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.517140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.517326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.517373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.517565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.517613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.517801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.517848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.518035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.518081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.518245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.518292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.518460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.518507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 
00:26:28.562 [2024-07-26 14:20:36.518735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.518801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.519021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.519086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.519296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.519343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.519515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.519598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.519792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.519840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.520052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.520122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.520310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.520366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.520519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.520576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.520765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.520812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.521006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.521054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 
00:26:28.562 [2024-07-26 14:20:36.521212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.521259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.521416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.521463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.521625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.521673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.521827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.521873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.522021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.522068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.522262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.522310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.522504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.522563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.522722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.522770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.522960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.523007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.562 qpair failed and we were unable to recover it. 00:26:28.562 [2024-07-26 14:20:36.523179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.562 [2024-07-26 14:20:36.523226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.563 qpair failed and we were unable to recover it. 
00:26:28.563 [2024-07-26 14:20:36.523394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.563 [2024-07-26 14:20:36.523442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.563 qpair failed and we were unable to recover it. 00:26:28.563 [2024-07-26 14:20:36.523641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.563 [2024-07-26 14:20:36.523708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.563 qpair failed and we were unable to recover it. 00:26:28.563 [2024-07-26 14:20:36.523932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.563 [2024-07-26 14:20:36.523997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.563 qpair failed and we were unable to recover it. 00:26:28.563 [2024-07-26 14:20:36.524154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.563 [2024-07-26 14:20:36.524203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.563 qpair failed and we were unable to recover it. 00:26:28.563 [2024-07-26 14:20:36.524378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.563 [2024-07-26 14:20:36.524424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.563 qpair failed and we were unable to recover it. 00:26:28.563 [2024-07-26 14:20:36.524604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.563 [2024-07-26 14:20:36.524653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.563 qpair failed and we were unable to recover it. 00:26:28.563 [2024-07-26 14:20:36.524850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.563 [2024-07-26 14:20:36.524896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.563 qpair failed and we were unable to recover it. 00:26:28.563 [2024-07-26 14:20:36.525057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.563 [2024-07-26 14:20:36.525106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.563 qpair failed and we were unable to recover it. 00:26:28.563 [2024-07-26 14:20:36.525264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.563 [2024-07-26 14:20:36.525311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.563 qpair failed and we were unable to recover it. 00:26:28.563 [2024-07-26 14:20:36.525509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.563 [2024-07-26 14:20:36.525567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 
00:26:28.840 [2024-07-26 14:20:36.525760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.525827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.526050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.526118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.526296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.526343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.526510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.526570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.526765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.526812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.526978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.527026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.527190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.527237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.527389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.527437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.527659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.527707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.527862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.527909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 
00:26:28.840 [2024-07-26 14:20:36.528095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.528141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.528361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.528408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.528628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.528699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.528877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.528924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.529059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.529106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.529261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.529309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.529474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.529537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.529734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.529781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.529996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.530043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 00:26:28.840 [2024-07-26 14:20:36.530200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.840 [2024-07-26 14:20:36.530247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:28.840 qpair failed and we were unable to recover it. 
00:26:28.840 [2024-07-26 14:20:36.530396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.840 [2024-07-26 14:20:36.530445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:28.840 qpair failed and we were unable to recover it.
00:26:28.841 [... the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplet repeats back-to-back for tqpair=0x7f4340000b90 from 14:20:36.530587 through 14:20:36.553492 ...]
00:26:28.843 [2024-07-26 14:20:36.553702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.843 [2024-07-26 14:20:36.553775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.843 qpair failed and we were unable to recover it.
00:26:28.843 [... the same triplet repeats back-to-back for tqpair=0x7f4330000b90 from 14:20:36.553987 through 14:20:36.582718 ...]
00:26:28.846 [2024-07-26 14:20:36.582947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.846 [2024-07-26 14:20:36.583002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.846 qpair failed and we were unable to recover it. 00:26:28.846 [2024-07-26 14:20:36.583190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.846 [2024-07-26 14:20:36.583244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.846 qpair failed and we were unable to recover it. 00:26:28.846 [2024-07-26 14:20:36.583461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.846 [2024-07-26 14:20:36.583515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.846 qpair failed and we were unable to recover it. 00:26:28.846 [2024-07-26 14:20:36.583780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.846 [2024-07-26 14:20:36.583837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.846 qpair failed and we were unable to recover it. 00:26:28.846 [2024-07-26 14:20:36.584001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.846 [2024-07-26 14:20:36.584056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.846 qpair failed and we were unable to recover it. 00:26:28.846 [2024-07-26 14:20:36.584249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.846 [2024-07-26 14:20:36.584304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.846 qpair failed and we were unable to recover it. 00:26:28.846 [2024-07-26 14:20:36.584499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.846 [2024-07-26 14:20:36.584567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.846 qpair failed and we were unable to recover it. 00:26:28.846 [2024-07-26 14:20:36.584817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.846 [2024-07-26 14:20:36.584875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.846 qpair failed and we were unable to recover it. 00:26:28.846 [2024-07-26 14:20:36.585072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.846 [2024-07-26 14:20:36.585127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.846 qpair failed and we were unable to recover it. 00:26:28.846 [2024-07-26 14:20:36.585311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.585366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 
00:26:28.847 [2024-07-26 14:20:36.585571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.585627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.585811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.585868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.586084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.586139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.586337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.586401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.586608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.586665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.586843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.586897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.587102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.587156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.587335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.587392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.587615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.587671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.587881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.587964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 
00:26:28.847 [2024-07-26 14:20:36.588182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.588246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.588484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.588578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.588817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.588873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.589081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.589136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.589370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.589425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.589619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.589677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.589890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.589944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.590162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.590217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.590424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.590480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.590721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.590777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 
00:26:28.847 [2024-07-26 14:20:36.590961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.591016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.591216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.591270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.591461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.591516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.591719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.591777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.592001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.592056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.592276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.592332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.592519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.592593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.592820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.592874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.593087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.593142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.593366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.593424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 
00:26:28.847 [2024-07-26 14:20:36.593690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.593746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.593924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.593980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.594147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.594202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.594419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.594473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.594685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.594744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.594958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.595012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.847 [2024-07-26 14:20:36.595229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.847 [2024-07-26 14:20:36.595283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.847 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.595512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.595584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.595796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.595850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.596029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.596086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 
00:26:28.848 [2024-07-26 14:20:36.596316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.596381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.596662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.596719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.596917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.596972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.597183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.597245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.597464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.597519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.597739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.597795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.597978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.598034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.598251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.598306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.598472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.598549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.598737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.598794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 
00:26:28.848 [2024-07-26 14:20:36.598971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.599025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.599239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.599294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.599485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.599552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.599770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.599825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.600002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.600056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.600305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.600360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.600595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.600651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.600858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.600913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.601097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.601151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.601363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.601417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 
00:26:28.848 [2024-07-26 14:20:36.601663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.601720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.601891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.601948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.602195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.602250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.602440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.602495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.602699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.602754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.602977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.603032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.603249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.603303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.603536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.603592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.603824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.603879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.604110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.604171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 
00:26:28.848 [2024-07-26 14:20:36.604364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.604421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.604690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.604747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.604936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.604991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.605210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.605266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.848 qpair failed and we were unable to recover it. 00:26:28.848 [2024-07-26 14:20:36.605445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.848 [2024-07-26 14:20:36.605501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.605727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.605783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.605958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.606016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.606209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.606264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.606438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.606492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.606708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.606763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 
00:26:28.849 [2024-07-26 14:20:36.607005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.607060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.607228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.607283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.607503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.607576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.607799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.607864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.608058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.608112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.608336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.608391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.608617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.608674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.608886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.608939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.609154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.609209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.609460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.609516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 
00:26:28.849 [2024-07-26 14:20:36.609747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.609804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.609985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.610039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.610222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.610287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.610461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.610518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.610725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.610780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.611031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.611086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.611273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.611352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.611586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.611644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.611870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.611924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.612149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.612204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 
00:26:28.849 [2024-07-26 14:20:36.612416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.612492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.612739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.612795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.613008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.613062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.613237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.613292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.613467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.613522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.613751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.849 [2024-07-26 14:20:36.613806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.849 qpair failed and we were unable to recover it. 00:26:28.849 [2024-07-26 14:20:36.613983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.850 [2024-07-26 14:20:36.614038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.850 qpair failed and we were unable to recover it. 00:26:28.850 [2024-07-26 14:20:36.614218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.850 [2024-07-26 14:20:36.614273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.850 qpair failed and we were unable to recover it. 00:26:28.850 [2024-07-26 14:20:36.614524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.850 [2024-07-26 14:20:36.614592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.850 qpair failed and we were unable to recover it. 00:26:28.850 [2024-07-26 14:20:36.614838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.850 [2024-07-26 14:20:36.614892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.850 qpair failed and we were unable to recover it. 
00:26:28.850 [2024-07-26 14:20:36.615093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.850 [2024-07-26 14:20:36.615149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.850 qpair failed and we were unable to recover it. 00:26:28.850 [2024-07-26 14:20:36.615371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.850 [2024-07-26 14:20:36.615426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.850 qpair failed and we were unable to recover it. 00:26:28.850 [2024-07-26 14:20:36.615624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.850 [2024-07-26 14:20:36.615679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.850 qpair failed and we were unable to recover it. 00:26:28.850 [2024-07-26 14:20:36.615853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.850 [2024-07-26 14:20:36.615910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.850 qpair failed and we were unable to recover it. 00:26:28.850 [2024-07-26 14:20:36.616125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.850 [2024-07-26 14:20:36.616191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.850 qpair failed and we were unable to recover it. 00:26:28.850 [2024-07-26 14:20:36.616445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.850 [2024-07-26 14:20:36.616507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.850 qpair failed and we were unable to recover it. 00:26:28.850 [2024-07-26 14:20:36.616731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.850 [2024-07-26 14:20:36.616815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.850 qpair failed and we were unable to recover it. 00:26:28.850 [2024-07-26 14:20:36.617127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.850 [2024-07-26 14:20:36.617193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.850 qpair failed and we were unable to recover it. 00:26:28.850 [2024-07-26 14:20:36.617489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.850 [2024-07-26 14:20:36.617587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.850 qpair failed and we were unable to recover it. 00:26:28.850 [2024-07-26 14:20:36.617800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.850 [2024-07-26 14:20:36.617856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.850 qpair failed and we were unable to recover it. 
00:26:28.850 [2024-07-26 14:20:36.618060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.850 [2024-07-26 14:20:36.618114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.850 qpair failed and we were unable to recover it.
00:26:28.855 [... the same three-line connect()/qpair-failure sequence repeats for every retry between 14:20:36.618298 and 14:20:36.680432, always with errno = 111, tqpair=0x7f4330000b90, addr=10.0.0.2, port=4420; duplicate entries elided ...]
00:26:28.856 [2024-07-26 14:20:36.680667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.680734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.681027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.681092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.681379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.681443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.681666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.681733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.682000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.682068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.682364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.682445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.682760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.682827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.683129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.683194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.683446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.683510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.683791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.683855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 
00:26:28.856 [2024-07-26 14:20:36.684135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.684199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.684454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.684518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.684774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.684838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.685088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.685152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.685397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.685461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.685736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.685809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.686064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.686131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.686374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.686440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.686723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.686789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.687104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.687168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 
00:26:28.856 [2024-07-26 14:20:36.687427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.687491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.687798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.687863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.688127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.688191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.688473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.688553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.688762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.688826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.689090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.689156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.689343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.689410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.689652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.689718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.689965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.690029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.690226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.690290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 
00:26:28.856 [2024-07-26 14:20:36.690553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.856 [2024-07-26 14:20:36.690617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.856 qpair failed and we were unable to recover it. 00:26:28.856 [2024-07-26 14:20:36.690894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.690959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.691257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.691319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.691601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.691676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.691925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.691989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.692187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.692250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.692471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.692557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.692799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.692863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.693085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.693148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.693396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.693468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 
00:26:28.857 [2024-07-26 14:20:36.693729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.693794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.694042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.694106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.694352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.694417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.694709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.694773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.695015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.695081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.695322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.695387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.695630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.695697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.695932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.695996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.696288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.696351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.696610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.696675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 
00:26:28.857 [2024-07-26 14:20:36.696919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.696986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.697193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.697256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.697507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.697588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.697840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.697905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.698155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.698218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.698464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.698560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.698781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.698849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.699142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.699206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.699420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.699484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.699789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.699854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 
00:26:28.857 [2024-07-26 14:20:36.700117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.700183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.700423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.700486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.700753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.700817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.701082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.701148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.701391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.701458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.701715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.701780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.702013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.702076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.702336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.702400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.857 qpair failed and we were unable to recover it. 00:26:28.857 [2024-07-26 14:20:36.702634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.857 [2024-07-26 14:20:36.702701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.702953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.703019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 
00:26:28.858 [2024-07-26 14:20:36.703258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.703323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.703569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.703645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.703942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.704006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.704194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.704267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.704506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.704584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.704825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.704888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.705171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.705234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.705478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.705555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.705854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.705918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.706141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.706205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 
00:26:28.858 [2024-07-26 14:20:36.706441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.706504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.706778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.706845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.707140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.707203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.707453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.707518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.707791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.707855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.708097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.708159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.708438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.708501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.708782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.708846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.709089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.709152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.709337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.709403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 
00:26:28.858 [2024-07-26 14:20:36.709660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.709726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.709981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.710045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.710292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.710355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.710601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.710668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.710921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.710988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.711274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.711338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.711615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.711680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.711964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.712028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.712307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.712371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.712666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.712741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 
00:26:28.858 [2024-07-26 14:20:36.713000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.713064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.713312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.713376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.713630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.713694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.713945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.714009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.858 qpair failed and we were unable to recover it. 00:26:28.858 [2024-07-26 14:20:36.714290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.858 [2024-07-26 14:20:36.714354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.714613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.714678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.714918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.714982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.715233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.715297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.715557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.715622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.715901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.715964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 
00:26:28.859 [2024-07-26 14:20:36.716243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.716306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.716556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.716621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.716868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.716931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.717174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.717249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.717472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.717550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.717811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.717875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.718064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.718128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.718365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.718429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.718724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.718789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.719033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.719097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 
00:26:28.859 [2024-07-26 14:20:36.719394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.719458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.719770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.719834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.720113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.720178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.720434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.720498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.720796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.720860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.721106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.721169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.721449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.721513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.721746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.721811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.722045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.722109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.722323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.722387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 
00:26:28.859 [2024-07-26 14:20:36.722682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.722749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.722953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.723020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.723279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.723342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.723583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.723649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.723889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.723952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.724199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.724263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.724512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.724590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.859 qpair failed and we were unable to recover it. 00:26:28.859 [2024-07-26 14:20:36.724872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.859 [2024-07-26 14:20:36.724936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.725132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.725198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.725447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.725512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 
00:26:28.860 [2024-07-26 14:20:36.725782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.725868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.726130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.726194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.726445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.726509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.726763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.726828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.727124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.727188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.727403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.727467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.727759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.727823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.728018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.728082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.728291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.728355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.728598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.728667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 
00:26:28.860 [2024-07-26 14:20:36.728954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.729018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.729259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.729325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.729613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.729678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.729967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.730033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.730256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.730321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.730579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.730647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.730893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.730958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.731203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.731267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.731519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.731597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.731869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.731932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 
00:26:28.860 [2024-07-26 14:20:36.732176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.732241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.732540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.732606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.732858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.732922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.733167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.733231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.733473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.733550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.733807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.733869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.734115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.734179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.734471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.734566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.734838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.734902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.735144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.735210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 
00:26:28.860 [2024-07-26 14:20:36.735483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.735562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.735776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.735840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.736087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.736151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.736383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.736446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.736739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.860 [2024-07-26 14:20:36.736803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.860 qpair failed and we were unable to recover it. 00:26:28.860 [2024-07-26 14:20:36.737093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.737157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.737430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.737493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.737709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.737773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.738014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.738077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.738361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.738425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 
00:26:28.861 [2024-07-26 14:20:36.738695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.738770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.738989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.739056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.739305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.739368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.739597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.739664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.739894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.739958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.740200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.740263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.740474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.740554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.740814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.740878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.741121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.741188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.741437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.741503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 
00:26:28.861 [2024-07-26 14:20:36.741774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.741838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.742077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.742141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.742382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.742448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.742706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.742770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.743065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.743129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.743381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.743448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.743753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.743819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.744053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.744117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.744355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.744421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.744674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.744739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 
00:26:28.861 [2024-07-26 14:20:36.745016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.745079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.745363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.745426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.745674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.745739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.745985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.746048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.746332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.746394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.746605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.746669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.746937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.747001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.747244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.747307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.747592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.747658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.747939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.748003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 
00:26:28.861 [2024-07-26 14:20:36.748251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.748314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.748563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.748628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.861 [2024-07-26 14:20:36.748825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.861 [2024-07-26 14:20:36.748889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.861 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.749132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.749197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.749459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.749524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.749783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.749847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.750104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.750168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.750458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.750522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.750836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.750900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.751150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.751216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 
00:26:28.862 [2024-07-26 14:20:36.751465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.751552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.751839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.751903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.752150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.752214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.752432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.752495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.752802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.752867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.753107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.753172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.753421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.753489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.753793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.753857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.754063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.754127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.754422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.754486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 
00:26:28.862 [2024-07-26 14:20:36.754742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.754807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.755019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.755082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.755363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.755425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.755651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.755716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.755992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.756056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.756295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.756357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.756606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.756672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.756925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.756989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.757244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.757308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.757596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.757661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 
00:26:28.862 [2024-07-26 14:20:36.757865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.757929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.758174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.758237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.758521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.758598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.758852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.758915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.759157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.759221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.759498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.759577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.759822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.759885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.760154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.760217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.760452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.760518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 00:26:28.862 [2024-07-26 14:20:36.760824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.862 [2024-07-26 14:20:36.760888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.862 qpair failed and we were unable to recover it. 
00:26:28.862 [2024-07-26 14:20:36.761129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.761192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.761477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.761555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.761834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.761897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.762151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.762214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.762466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.762544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.762826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.762889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.763143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.763206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.763414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.763480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.763738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.763804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.764044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.764107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 
00:26:28.863 [2024-07-26 14:20:36.764345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.764417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.764675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.764740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.764972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.765035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.765218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.765284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.765505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.765587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.765800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.765864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.766116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.766179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.766434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.766496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.766721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.766783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.767025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.767086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 
00:26:28.863 [2024-07-26 14:20:36.767323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.767384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.767637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.767705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.767967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.768031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.768275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.768339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.768590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.768654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.768890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.768954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.769208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.769271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.769474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.769561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.769845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.769909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.770195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.770258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 
00:26:28.863 [2024-07-26 14:20:36.770466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.770542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.770797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.770862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.863 qpair failed and we were unable to recover it. 00:26:28.863 [2024-07-26 14:20:36.771110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.863 [2024-07-26 14:20:36.771175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.771472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.771550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.771797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.771861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.772149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.772212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.772461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.772524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.772835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.772900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.773187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.773249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.773492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.773572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 
00:26:28.864 [2024-07-26 14:20:36.773818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.773882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.774140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.774203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.774402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.774467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.774700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.774767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.775014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.775078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.775303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.775369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.775622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.775687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.775921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.775985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.776222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.776286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.776550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.776614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 
00:26:28.864 [2024-07-26 14:20:36.776888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.776960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.777245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.777310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.777586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.777651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.777881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.777945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.778195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.778257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.778502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.778581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.778836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.778900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.779168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.779232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.779521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.779597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 00:26:28.864 [2024-07-26 14:20:36.779809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.864 [2024-07-26 14:20:36.779873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:28.864 qpair failed and we were unable to recover it. 
00:26:28.865 [2024-07-26 14:20:36.786408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-07-26 14:20:36.786471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-07-26 14:20:36.786769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-07-26 14:20:36.786833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-07-26 14:20:36.787056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-07-26 14:20:36.787119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-07-26 14:20:36.787312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-07-26 14:20:36.787378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-07-26 14:20:36.787590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-07-26 14:20:36.787656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-07-26 14:20:36.787832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-07-26 14:20:36.787896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-07-26 14:20:36.788142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-07-26 14:20:36.788206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-07-26 14:20:36.788410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-07-26 14:20:36.788473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-07-26 14:20:36.788760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-07-26 14:20:36.788859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.865 [2024-07-26 14:20:36.789128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.865 [2024-07-26 14:20:36.789196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:28.865 qpair failed and we were unable to recover it.
00:26:28.871 [2024-07-26 14:20:36.838425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-07-26 14:20:36.838488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-07-26 14:20:36.838764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-07-26 14:20:36.838828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-07-26 14:20:36.839067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-07-26 14:20:36.839130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-07-26 14:20:36.839360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-07-26 14:20:36.839423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-07-26 14:20:36.839684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-07-26 14:20:36.839749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-07-26 14:20:36.840038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-07-26 14:20:36.840100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-07-26 14:20:36.840343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-07-26 14:20:36.840406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-07-26 14:20:36.840600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-07-26 14:20:36.840666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:28.871 [2024-07-26 14:20:36.840913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.871 [2024-07-26 14:20:36.840976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:28.871 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.841189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.841252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 
00:26:29.146 [2024-07-26 14:20:36.841472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.841555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.841805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.841868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.842118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.842181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.842417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.842480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.842749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.842814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.843076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.843140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.843383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.843445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.843705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.843769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.843959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.844025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.844248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.844310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 
00:26:29.146 [2024-07-26 14:20:36.844593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.844659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.844910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.844974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.845229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.845292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.845506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.845583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.845790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.845854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.846095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.846158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.846401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.846464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.846765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.846828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.847083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.847146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.847431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.847494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 
00:26:29.146 [2024-07-26 14:20:36.847752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.847825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.848108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.848172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.848414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.848476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.848684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.848748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.849005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.849067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.849316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.849378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.146 [2024-07-26 14:20:36.849637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.146 [2024-07-26 14:20:36.849701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.146 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.849947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.850009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.850249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.850312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.850558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.850623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 
00:26:29.147 [2024-07-26 14:20:36.850917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.850980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.851190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.851253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.851542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.851605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.851807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.851869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.852068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.852130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.852374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.852440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.852727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.852792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.853030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.853094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.853314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.853376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.853655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.853721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 
00:26:29.147 [2024-07-26 14:20:36.853962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.854026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.854270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.854332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.854604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.854668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.854962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.855024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.855259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.855322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.855551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.855615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.855836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.855898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.856140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.856213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.856439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.856502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.856736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.856799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 
00:26:29.147 [2024-07-26 14:20:36.856997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.857060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.857304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.857368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.857598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.857663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.857950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.858013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.858252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.858314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.858595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.858660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.858909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.858972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.859180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.859243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.859477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.859552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.859815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.859878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 
00:26:29.147 [2024-07-26 14:20:36.860155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.860217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.860504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.860597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.860827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.860892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.861164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.861226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.147 qpair failed and we were unable to recover it. 00:26:29.147 [2024-07-26 14:20:36.861486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.147 [2024-07-26 14:20:36.861569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.861760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.861827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.862084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.862146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.862360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.862422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.862709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.862774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.863010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.863073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 
00:26:29.148 [2024-07-26 14:20:36.863281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.863347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.863568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.863632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.863871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.863934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.864186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.864248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.864430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.864503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.864772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.864835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.865108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.865172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.865431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.865493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.865723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.865788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.866072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.866135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 
00:26:29.148 [2024-07-26 14:20:36.866421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.866484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.866744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.866808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.867039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.867101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.867341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.867404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.867671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.867736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.867982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.868044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.868239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.868302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.868581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.868646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.868939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.869001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.869274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.869337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 
00:26:29.148 [2024-07-26 14:20:36.869588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.869653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.869897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.869959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.870205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.870268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.870499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.870577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.870794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.870856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.871099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.871161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.871410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.871472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.871740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.871804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.872083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.872146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.872392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.872454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 
00:26:29.148 [2024-07-26 14:20:36.872769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.872834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.873108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.873172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-07-26 14:20:36.873381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-07-26 14:20:36.873445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.873661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.873726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.873969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.874032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.874304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.874367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.874641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.874707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.874969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.875032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.875277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.875341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.875556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.875621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 
00:26:29.149 [2024-07-26 14:20:36.875831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.875896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.876139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.876203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.876409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.876471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.876721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.876785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.876993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.877061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.877334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.877398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.877651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.877715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.877906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.877972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.878222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.878285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.878545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.878609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 
00:26:29.149 [2024-07-26 14:20:36.878827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.878893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.879168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.879231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.879468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.879548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.879804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.879866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.880108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.880171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.880460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.880523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.880799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.880862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.881107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.881170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.881412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.881475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-07-26 14:20:36.881744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-07-26 14:20:36.881809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 
00:26:29.149 [2024-07-26 14:20:36.882026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.149 [2024-07-26 14:20:36.882089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:29.149 qpair failed and we were unable to recover it.
00:26:29.149 [2024-07-26 14:20:36.882305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.149 [2024-07-26 14:20:36.882367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:29.149 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every retry from 14:20:36.882 through 14:20:36.948; the duplicate entries are omitted here ...]
00:26:29.155 [2024-07-26 14:20:36.948079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.155 [2024-07-26 14:20:36.948142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:29.155 qpair failed and we were unable to recover it.
00:26:29.155 [2024-07-26 14:20:36.948391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-07-26 14:20:36.948454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-07-26 14:20:36.948711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-07-26 14:20:36.948776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-07-26 14:20:36.949014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-07-26 14:20:36.949076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-07-26 14:20:36.949280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-07-26 14:20:36.949343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-07-26 14:20:36.949560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-07-26 14:20:36.949625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-07-26 14:20:36.949874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-07-26 14:20:36.949936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-07-26 14:20:36.950183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-07-26 14:20:36.950248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-07-26 14:20:36.950461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-07-26 14:20:36.950525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-07-26 14:20:36.950794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-07-26 14:20:36.950856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-07-26 14:20:36.951048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-07-26 14:20:36.951110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 
00:26:29.155 [2024-07-26 14:20:36.951364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-07-26 14:20:36.951426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-07-26 14:20:36.951696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-07-26 14:20:36.951760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-07-26 14:20:36.951998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-07-26 14:20:36.952060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-07-26 14:20:36.952279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-07-26 14:20:36.952341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-07-26 14:20:36.952592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.952656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.952898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.952961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.953194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.953259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.953511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.953592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.953873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.953935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.954212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.954285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 
00:26:29.156 [2024-07-26 14:20:36.954490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.954571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.954820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.954882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.955164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.955227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.955507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.955589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.955837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.955899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.956110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.956174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.956384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.956447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.956706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.956770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.957012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.957077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.957296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.957359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 
00:26:29.156 [2024-07-26 14:20:36.957590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.957656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.957902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.957965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.958247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.958309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.958564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.958629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.958875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.958940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.959187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.959251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.959555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.959619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.959859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.959922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.960162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.960225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.960501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.960576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 
00:26:29.156 [2024-07-26 14:20:36.960843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.960906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.961106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.961168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.961447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.961509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.961753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.961816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.962028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.962090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.962349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.962412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.962685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.962759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.962976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.963040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.963247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.963311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.963598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.963663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 
00:26:29.156 [2024-07-26 14:20:36.963919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.963981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.156 [2024-07-26 14:20:36.964233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.156 [2024-07-26 14:20:36.964296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.156 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.964547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.964611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.964813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.964876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.965119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.965182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.965412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.965475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.965726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.965799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.966055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.966106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.966335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.966385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.966600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.966634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 
00:26:29.157 [2024-07-26 14:20:36.966809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.966843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.966952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.966985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.967126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.967159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.967268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.967336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.967591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.967625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.967759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.967793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.968073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.968136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.968379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.968443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.968646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.968680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.968796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.968829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 
00:26:29.157 [2024-07-26 14:20:36.968945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.968978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.969134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.969197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.969402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.969465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.969680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.969713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.969856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.969926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.970171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.970220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.970509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.970606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.970717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.970750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.970934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.970996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.971270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.971332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 
00:26:29.157 [2024-07-26 14:20:36.971604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.971638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.971778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.971811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.972030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.972092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.972359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.972411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.972656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.972689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.972856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.972910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.973008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.973039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.973206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.973269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.973507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.973585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 00:26:29.157 [2024-07-26 14:20:36.973752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.157 [2024-07-26 14:20:36.973785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.157 qpair failed and we were unable to recover it. 
00:26:29.158 [2024-07-26 14:20:36.973970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.974048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.974331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.974393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.974605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.974639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.974804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.974836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.974969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.975001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.975203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.975266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.975452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.975514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.975685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.975718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.975834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.975867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.976111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.976144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 
00:26:29.158 [2024-07-26 14:20:36.976282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.976315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.976615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.976651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.976791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.976847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.977101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.977164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.977442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.977505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.977706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.977739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.977906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.977973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.978216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.978278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.978477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.978576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.978731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.978763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 
00:26:29.158 [2024-07-26 14:20:36.978904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.978936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.979072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.979105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.979355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.979420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.979637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.979688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.979895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.979968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.980209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.980281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.980501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.980595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.980772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.980842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.981079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.981144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.981392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.981455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 
00:26:29.158 [2024-07-26 14:20:36.981726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.981778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.982013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.982064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.982208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.982258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.982517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.982606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.982811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.982889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.983136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.983199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.983490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.983585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.158 [2024-07-26 14:20:36.983760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.158 [2024-07-26 14:20:36.983811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.158 qpair failed and we were unable to recover it. 00:26:29.159 [2024-07-26 14:20:36.983966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.159 [2024-07-26 14:20:36.984017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.159 qpair failed and we were unable to recover it. 00:26:29.159 [2024-07-26 14:20:36.984271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.159 [2024-07-26 14:20:36.984334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.159 qpair failed and we were unable to recover it. 
00:26:29.159 [2024-07-26 14:20:36.984592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.159 [2024-07-26 14:20:36.984645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.159 qpair failed and we were unable to recover it. 00:26:29.159 [2024-07-26 14:20:36.984851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.159 [2024-07-26 14:20:36.984901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.159 qpair failed and we were unable to recover it. 00:26:29.159 [2024-07-26 14:20:36.985084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.159 [2024-07-26 14:20:36.985148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.159 qpair failed and we were unable to recover it. 00:26:29.159 [2024-07-26 14:20:36.985436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.159 [2024-07-26 14:20:36.985500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.159 qpair failed and we were unable to recover it. 00:26:29.159 [2024-07-26 14:20:36.985759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.159 [2024-07-26 14:20:36.985809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.159 qpair failed and we were unable to recover it. 00:26:29.159 [2024-07-26 14:20:36.986101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.159 [2024-07-26 14:20:36.986164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.159 qpair failed and we were unable to recover it. 00:26:29.159 [2024-07-26 14:20:36.986441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.159 [2024-07-26 14:20:36.986504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.159 qpair failed and we were unable to recover it. 00:26:29.159 [2024-07-26 14:20:36.986769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.159 [2024-07-26 14:20:36.986820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.159 qpair failed and we were unable to recover it. 00:26:29.159 [2024-07-26 14:20:36.987035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.159 [2024-07-26 14:20:36.987098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.159 qpair failed and we were unable to recover it. 00:26:29.159 [2024-07-26 14:20:36.987376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.159 [2024-07-26 14:20:36.987438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.159 qpair failed and we were unable to recover it. 
00:26:29.164 [2024-07-26 14:20:37.050538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.164 [2024-07-26 14:20:37.050621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.164 qpair failed and we were unable to recover it. 00:26:29.164 [2024-07-26 14:20:37.050823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.164 [2024-07-26 14:20:37.050888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.164 qpair failed and we were unable to recover it. 00:26:29.164 [2024-07-26 14:20:37.051076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.164 [2024-07-26 14:20:37.051141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.164 qpair failed and we were unable to recover it. 00:26:29.164 [2024-07-26 14:20:37.051366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.164 [2024-07-26 14:20:37.051430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.164 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.051711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.051776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.052022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.052087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.052338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.052404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.052607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.052674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.052930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.052995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.053280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.053345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 
00:26:29.165 [2024-07-26 14:20:37.053588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.053653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.053944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.054009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.054256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.054321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.054546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.054614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.054868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.054933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.055174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.055239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.055518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.055610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.055868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.055932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.056166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.056240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.056500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.056587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 
00:26:29.165 [2024-07-26 14:20:37.056834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.056898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.057165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.057229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.057464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.057547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.057786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.057851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.058090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.058153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.058358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.058423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.058696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.058749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.059024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.059089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.059373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.059437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.059736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.059802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 
00:26:29.165 [2024-07-26 14:20:37.060045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.060110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.060331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.060395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.060626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.060694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.060979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.061045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.061298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.061362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.061635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.061701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.061978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.062043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.062284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.062347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.062644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.062710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.062965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.063029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 
00:26:29.165 [2024-07-26 14:20:37.063325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.063390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.165 [2024-07-26 14:20:37.063593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.165 [2024-07-26 14:20:37.063659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.165 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.063904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.063969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.064208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.064271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.064559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.064626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.064880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.064954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.065175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.065239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.065490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.065574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.065831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.065897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.066086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.066150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 
00:26:29.166 [2024-07-26 14:20:37.066400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.066465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.066762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.066828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.067079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.067144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.067388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.067452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.067748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.067814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.068063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.068128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.068418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.068483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.068741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.068806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.069015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.069079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.069381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.069447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 
00:26:29.166 [2024-07-26 14:20:37.069711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.069777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.070000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.070063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.070355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.070419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.070707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.070774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.071030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.071094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.071283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.071347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.071581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.071647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.071913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.071978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.072218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.072283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.072551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.072617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 
00:26:29.166 [2024-07-26 14:20:37.072912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.072977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.073203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.073267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.073516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.073598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.073887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.073951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.074194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.074259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.074502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.074588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.074787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.074852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.075131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.075196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.075416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.075480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 00:26:29.166 [2024-07-26 14:20:37.075713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.166 [2024-07-26 14:20:37.075778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.166 qpair failed and we were unable to recover it. 
00:26:29.166 [2024-07-26 14:20:37.075975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.076040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.076270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.076334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.076585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.076653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.076931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.076984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.077141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.077195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.077454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.077521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.077775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.077849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.078089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.078154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.078377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.078442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.078692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.078757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 
00:26:29.167 [2024-07-26 14:20:37.079036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.079101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.079300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.079365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.079583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.079650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.079886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.079951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.080186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.080250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.080492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.080576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.080801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.080866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.081130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.081182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.081385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.081464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.081699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.081765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 
00:26:29.167 [2024-07-26 14:20:37.082026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.082090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.082371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.082435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.082690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.082758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.082988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.083052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.083320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.083384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.083603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.083669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.083959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.084023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.084299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.084365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.084617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.084683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.084934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.084999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 
00:26:29.167 [2024-07-26 14:20:37.085289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.085354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.085599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.085665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.085912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.085977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.086174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.086248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.086499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.086577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.086818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.086883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.087131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.087183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.167 [2024-07-26 14:20:37.087384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.167 [2024-07-26 14:20:37.087461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.167 qpair failed and we were unable to recover it. 00:26:29.168 [2024-07-26 14:20:37.087720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.168 [2024-07-26 14:20:37.087785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.168 qpair failed and we were unable to recover it. 00:26:29.168 [2024-07-26 14:20:37.088022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.168 [2024-07-26 14:20:37.088086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.168 qpair failed and we were unable to recover it. 
00:26:29.168 [2024-07-26 14:20:37.088291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.168 [2024-07-26 14:20:37.088356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.168 qpair failed and we were unable to recover it. 00:26:29.168 [2024-07-26 14:20:37.088636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.168 [2024-07-26 14:20:37.088704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.168 qpair failed and we were unable to recover it. 00:26:29.168 [2024-07-26 14:20:37.088982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.168 [2024-07-26 14:20:37.089047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.168 qpair failed and we were unable to recover it. 00:26:29.168 [2024-07-26 14:20:37.089286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.168 [2024-07-26 14:20:37.089351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.168 qpair failed and we were unable to recover it. 00:26:29.168 [2024-07-26 14:20:37.089563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.168 [2024-07-26 14:20:37.089628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.168 qpair failed and we were unable to recover it. 00:26:29.168 [2024-07-26 14:20:37.089874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.168 [2024-07-26 14:20:37.089939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.168 qpair failed and we were unable to recover it. 00:26:29.168 [2024-07-26 14:20:37.090148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.168 [2024-07-26 14:20:37.090213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.168 qpair failed and we were unable to recover it. 00:26:29.168 [2024-07-26 14:20:37.090469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.168 [2024-07-26 14:20:37.090547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.168 qpair failed and we were unable to recover it. 00:26:29.168 [2024-07-26 14:20:37.090809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.168 [2024-07-26 14:20:37.090874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.168 qpair failed and we were unable to recover it. 00:26:29.168 [2024-07-26 14:20:37.091112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.168 [2024-07-26 14:20:37.091176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.168 qpair failed and we were unable to recover it. 
00:26:29.168 [2024-07-26 14:20:37.091452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.168 [2024-07-26 14:20:37.091516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:29.168 qpair failed and we were unable to recover it.
[... the same three-line pattern -- posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." -- repeats continuously from 14:20:37.091 through 14:20:37.157; duplicate repetitions elided ...]
00:26:29.452 [2024-07-26 14:20:37.157143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.452 [2024-07-26 14:20:37.157207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:29.452 qpair failed and we were unable to recover it.
00:26:29.452 [2024-07-26 14:20:37.157488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.452 [2024-07-26 14:20:37.157568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.452 qpair failed and we were unable to recover it. 00:26:29.452 [2024-07-26 14:20:37.157781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.452 [2024-07-26 14:20:37.157846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.452 qpair failed and we were unable to recover it. 00:26:29.452 [2024-07-26 14:20:37.158132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.452 [2024-07-26 14:20:37.158197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.452 qpair failed and we were unable to recover it. 00:26:29.452 [2024-07-26 14:20:37.158477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.452 [2024-07-26 14:20:37.158561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.452 qpair failed and we were unable to recover it. 00:26:29.452 [2024-07-26 14:20:37.158838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.452 [2024-07-26 14:20:37.158891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.452 qpair failed and we were unable to recover it. 00:26:29.452 [2024-07-26 14:20:37.159141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.452 [2024-07-26 14:20:37.159204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.452 qpair failed and we were unable to recover it. 00:26:29.452 [2024-07-26 14:20:37.159487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.452 [2024-07-26 14:20:37.159581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.452 qpair failed and we were unable to recover it. 00:26:29.452 [2024-07-26 14:20:37.159846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.452 [2024-07-26 14:20:37.159911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.452 qpair failed and we were unable to recover it. 00:26:29.452 [2024-07-26 14:20:37.160162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.452 [2024-07-26 14:20:37.160227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.452 qpair failed and we were unable to recover it. 00:26:29.452 [2024-07-26 14:20:37.160436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.452 [2024-07-26 14:20:37.160501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.452 qpair failed and we were unable to recover it. 
00:26:29.452 [2024-07-26 14:20:37.160777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.452 [2024-07-26 14:20:37.160842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.452 qpair failed and we were unable to recover it. 00:26:29.452 [2024-07-26 14:20:37.161067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.452 [2024-07-26 14:20:37.161131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.452 qpair failed and we were unable to recover it. 00:26:29.452 [2024-07-26 14:20:37.161362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.452 [2024-07-26 14:20:37.161428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.452 qpair failed and we were unable to recover it. 00:26:29.452 [2024-07-26 14:20:37.161706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.452 [2024-07-26 14:20:37.161772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.452 qpair failed and we were unable to recover it. 00:26:29.452 [2024-07-26 14:20:37.162065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.452 [2024-07-26 14:20:37.162130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.452 qpair failed and we were unable to recover it. 00:26:29.452 [2024-07-26 14:20:37.162419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.162482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.162750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.162815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.163030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.163104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.163345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.163410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.163657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.163723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 
00:26:29.453 [2024-07-26 14:20:37.163916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.163981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.164268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.164333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.164548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.164613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.164839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.164905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.165201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.165264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.165513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.165592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.165875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.165939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.166227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.166292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.166562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.166628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.166874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.166937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 
00:26:29.453 [2024-07-26 14:20:37.167125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.167189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.167463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.167562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.167832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.167897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.168174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.168237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.168493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.168578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.168801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.168866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.169112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.169176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.169424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.169488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.169783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.169846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.170095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.170159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 
00:26:29.453 [2024-07-26 14:20:37.170431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.170495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.170763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.170827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.171039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.171103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.171399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.171451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.171709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.171775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.172071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.172135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.172383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.172447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.172760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.172825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.173032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.173096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.173331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.173395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 
00:26:29.453 [2024-07-26 14:20:37.173685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-07-26 14:20:37.173751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-07-26 14:20:37.174033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.174097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.174357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.174421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.174680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.174746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.175004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.175068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.175307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.175371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.175645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.175710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.175994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.176058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.176365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.176430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.176737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.176802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 
00:26:29.454 [2024-07-26 14:20:37.177103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.177154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.177355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.177433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.177695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.177761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.178003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.178066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.178358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.178422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.178723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.178789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.179079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.179143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.179392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.179455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.179733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.179799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.180083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.180146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 
00:26:29.454 [2024-07-26 14:20:37.180395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.180458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.180759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.180812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.181071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.181135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.181388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.181451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.181708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.181772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.181994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.182058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.182343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.182408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.182664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.182729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.183020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.183071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.183238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.183309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 
00:26:29.454 [2024-07-26 14:20:37.183558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.183622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.183876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.183928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.184078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.184132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.184372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.184436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.184623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.184689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.184971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-07-26 14:20:37.185044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-07-26 14:20:37.185334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.185398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.185688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.185742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.185946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.186023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.186256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.186320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 
00:26:29.455 [2024-07-26 14:20:37.186560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.186626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.186887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.186951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.187195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.187259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.187577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.187642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.187860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.187924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.188169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.188234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.188482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.188560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.188814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.188877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.189127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.189190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.189453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.189517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 
00:26:29.455 [2024-07-26 14:20:37.189816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.189880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.190162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.190226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.190427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.190493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.190797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.190863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.191148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.191212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.191454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.191517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.191793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.191858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.192116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.192180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.192467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.192548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.192806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.192870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 
00:26:29.455 [2024-07-26 14:20:37.193072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.193137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.193413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.193477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.193757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.193832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.194096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.194160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.194413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.194476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.194751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.194815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.195057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.195121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.195367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.195431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.195732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.195798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.196095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.196160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 
00:26:29.455 [2024-07-26 14:20:37.196451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.196514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.196785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.196837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.197050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.197102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.197301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-07-26 14:20:37.197365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-07-26 14:20:37.197641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.456 [2024-07-26 14:20:37.197707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.456 qpair failed and we were unable to recover it. 00:26:29.456 [2024-07-26 14:20:37.197989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.456 [2024-07-26 14:20:37.198053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.456 qpair failed and we were unable to recover it. 00:26:29.456 [2024-07-26 14:20:37.198303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.456 [2024-07-26 14:20:37.198370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.456 qpair failed and we were unable to recover it. 00:26:29.456 [2024-07-26 14:20:37.198625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.456 [2024-07-26 14:20:37.198691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.456 qpair failed and we were unable to recover it. 00:26:29.456 [2024-07-26 14:20:37.198947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.456 [2024-07-26 14:20:37.199010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.456 qpair failed and we were unable to recover it. 00:26:29.456 [2024-07-26 14:20:37.199311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.456 [2024-07-26 14:20:37.199363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.456 qpair failed and we were unable to recover it. 
00:26:29.456 [2024-07-26 14:20:37.199566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-07-26 14:20:37.199644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [... ~200 further repetitions of the same three-line error elided: posix_sock_create connect() failed with errno = 111, followed by the nvme_tcp_qpair_connect_sock failure for tqpair=0x1030250 at 10.0.0.2:4420 and "qpair failed and we were unable to recover it."; the occurrences are identical except for microsecond timestamps, running from 14:20:37.199923 through 14:20:37.266834 ...]
00:26:29.461 [2024-07-26 14:20:37.267066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.461 [2024-07-26 14:20:37.267130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:29.461 qpair failed and we were unable to recover it.
00:26:29.461 [2024-07-26 14:20:37.267410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-07-26 14:20:37.267475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-07-26 14:20:37.267730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-07-26 14:20:37.267794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-07-26 14:20:37.268076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-07-26 14:20:37.268141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-07-26 14:20:37.268426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-07-26 14:20:37.268491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.268748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.268813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.269060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.269124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.269410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.269475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.269730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.269794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.270079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.270144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.270435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.270499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 
00:26:29.462 [2024-07-26 14:20:37.270806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.270871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.271113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.271178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.271384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.271448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.271749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.271815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.272034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.272099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.272339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.272404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.272645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.272714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.272957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.273021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.273257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.273323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.273544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.273611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 
00:26:29.462 [2024-07-26 14:20:37.273893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.273959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.274201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.274268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.274514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.274592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.274795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.274862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.275090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.275155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.275397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.275461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.275743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.275796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.276004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.276083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.276339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.276404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.276644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.276710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 
00:26:29.462 [2024-07-26 14:20:37.277012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.277075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.277365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.277429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.277687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.277752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.277994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.278061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.278355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.278407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.278591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.278674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.278919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.278983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.279217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.279282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.279564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.279629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.279874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.279937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 
00:26:29.462 [2024-07-26 14:20:37.280158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.280222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-07-26 14:20:37.280428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-07-26 14:20:37.280493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.280805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.280870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.281123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.281186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.281455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.281519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.281794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.281860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.282077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.282142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.282415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.282480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.282751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.282817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.283045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.283110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 
00:26:29.463 [2024-07-26 14:20:37.283346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.283410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.283707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.283774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.284018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.284082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.284356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.284421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.284713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.284778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.284980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.285046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.285326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.285392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.285593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.285676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.285961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.286013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.286213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.286289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 
00:26:29.463 [2024-07-26 14:20:37.286556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.286622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.286833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.286901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.287151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.287215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.287498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.287582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.287836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.287900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.288148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.288211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.288407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.288471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.288711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.288778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.289033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.289097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.289346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.289410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 
00:26:29.463 [2024-07-26 14:20:37.289659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.289726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.290002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.290066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-07-26 14:20:37.290353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-07-26 14:20:37.290417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.290683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.290749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.290958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.291022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.291221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.291285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.291501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.291580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.291827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.291892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.292126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.292191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.292473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.292551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 
00:26:29.464 [2024-07-26 14:20:37.292792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.292857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.293078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.293144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.293403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.293468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.293741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.293808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.294053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.294127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.294370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.294437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.294708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.294775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.295040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.295104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.295382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.295446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.295711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.295777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 
00:26:29.464 [2024-07-26 14:20:37.295989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.296053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.296300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.296365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.296648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.296715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.297008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.297073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.297354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.297418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.297720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.297784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.298028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.298092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.298274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.298338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.298607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.298672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.298906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.298971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 
00:26:29.464 [2024-07-26 14:20:37.299217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.299280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.299471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.299550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.299781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.299846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.300091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.300155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.300368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.300432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.300675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.300742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.300979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.301044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.301336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.301401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.301673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.301738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.464 [2024-07-26 14:20:37.301963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.302027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 
00:26:29.464 [2024-07-26 14:20:37.302254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.464 [2024-07-26 14:20:37.302289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.464 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.302519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.302611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.302756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.302789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.303039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.303074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.303236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.303300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.303553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.303610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.303742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.303776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.303972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.304022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.304230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.304322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.304554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.304621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 
00:26:29.465 [2024-07-26 14:20:37.304732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.304766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.304940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.305006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.305207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.305271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.305609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.305643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.305746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.305778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.305926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.305960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.306147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.306211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.306492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.306571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.306716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.306749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 00:26:29.465 [2024-07-26 14:20:37.306864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.465 [2024-07-26 14:20:37.306899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:29.465 qpair failed and we were unable to recover it. 
00:26:29.465 [2024-07-26 14:20:37.307100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-07-26 14:20:37.307164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.466 [2024-07-26 14:20:37.307418 - 14:20:37.317476] previous 3 messages repeated 56 more times (timestamps vary; every reconnect attempt on tqpair=0x1030250 failed the same way)
00:26:29.467 [2024-07-26 14:20:37.317595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103e230 (9): Bad file descriptor
00:26:29.467 [2024-07-26 14:20:37.317797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.467 [2024-07-26 14:20:37.317847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:29.467 qpair failed and we were unable to recover it.
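(Decoding the repeated failure: errno = 111 on Linux is ECONNREFUSED - the initiator's TCP SYN reaches 10.0.0.2, but nothing is accepting on port 4420, so the kernel answers with RST and each NVMe-oF/TCP qpair reconnect attempt fails immediately, producing the three-message pattern once per attempt. The minimal standalone sketch below is not taken from the SPDK sources; it reproduces the same errno against any reachable host with the port closed, and the address and port merely mirror the log.)

/* connect_refused.c - reproduce "connect() failed, errno = 111" (ECONNREFUSED).
 * Build: cc -o connect_refused connect_refused.c
 * Assumes 10.0.0.2 is reachable and nothing listens on TCP port 4420. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* NVMe-oF/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        /* With no listener on the port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}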
00:26:29.471 [2024-07-26 14:20:37.317989 - 14:20:37.348043] previous 3 messages repeated 151 more times (timestamps vary; every reconnect attempt on tqpair=0x7f4338000b90 against addr=10.0.0.2, port=4420 failed with errno = 111 and the qpair could not be recovered)
00:26:29.471 [2024-07-26 14:20:37.348257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.348324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.348579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.348614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.348747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.348780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.348923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.348956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.349068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.349101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.349304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.349369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.349614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.349649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.349789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.349824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.350038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.350103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.350282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.350348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 
00:26:29.471 [2024-07-26 14:20:37.350585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.350619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.350757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.350800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.350956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.351023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.351276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.351315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.351456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.351489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.351626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.351661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.351770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.351808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.351969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.352037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.352282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.352331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.352473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.352506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 
00:26:29.471 [2024-07-26 14:20:37.352719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.352755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.352877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.352912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.353089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.353153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.353397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.353433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.353552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.353595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.353712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.353747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.353891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.353926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.354158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.354211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.354370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-07-26 14:20:37.354413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-07-26 14:20:37.354667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.354700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 
00:26:29.472 [2024-07-26 14:20:37.354804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.354836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.355037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.355101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.355325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.355358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.355454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.355486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.355637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.355687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.355822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.355854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.356044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.356109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.356356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.356424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.356688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.356722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.356863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.356897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 
00:26:29.472 [2024-07-26 14:20:37.357041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.357093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.357324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.357390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.357637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.357673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.357841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.357909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.358158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.358191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.358303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.358333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.358509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.358550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.358782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.358825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.358946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.358977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.359212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.359277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 
00:26:29.472 [2024-07-26 14:20:37.359514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.359613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.359830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.359897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.360157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.360192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.360330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.360369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.360524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.360565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.360722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.360755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.360859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.360891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.361106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.361140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.361276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.361309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-07-26 14:20:37.361548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.361585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 
00:26:29.472 [2024-07-26 14:20:37.361709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-07-26 14:20:37.361756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.361887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.361922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.362103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.362160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.362402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.362468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.362810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.362879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.363125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.363190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.363424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.363488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.363806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.363840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.363948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.363981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.364161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.364225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 
00:26:29.473 [2024-07-26 14:20:37.364485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.364518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.364640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.364672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.364851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.364886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.364998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.365030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.365170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.365203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.365319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.365350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.365511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.365549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.365684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.365718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.365810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.365841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-07-26 14:20:37.365967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-07-26 14:20:37.366001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 
00:26:29.473 [2024-07-26 14:20:37.366225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.473 [2024-07-26 14:20:37.366275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.473 qpair failed and we were unable to recover it.
[... the same error triplet for tqpair=0x7f4340000b90 repeats with advancing timestamps through 2024-07-26 14:20:37.391296 ...]
00:26:29.475 [2024-07-26 14:20:37.391573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.475 [2024-07-26 14:20:37.391639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.475 qpair failed and we were unable to recover it. 00:26:29.475 [2024-07-26 14:20:37.391893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.475 [2024-07-26 14:20:37.391958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.475 qpair failed and we were unable to recover it. 00:26:29.475 [2024-07-26 14:20:37.392245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.475 [2024-07-26 14:20:37.392310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.475 qpair failed and we were unable to recover it. 00:26:29.475 [2024-07-26 14:20:37.392587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.475 [2024-07-26 14:20:37.392663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.475 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.392902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.392968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.393170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.393237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.393518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.393608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.393862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.393928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.394170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.394236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.394427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.394492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 
00:26:29.476 [2024-07-26 14:20:37.394795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.394861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.395154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.395218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.395497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.395581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.395795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.395861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.396147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.396211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.396458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.396524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.396834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.396901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.397199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.397264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.397473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.397558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.397841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.397906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 
00:26:29.476 [2024-07-26 14:20:37.398149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.398214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.398454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.398520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.398851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.398916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.399200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.399265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.399583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.399649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.399890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.399955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.400191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.400256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.400495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.400580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.400835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.400901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.401151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.401216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 
00:26:29.476 [2024-07-26 14:20:37.401418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.401484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.401739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.401805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.402083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.402147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.402440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.402505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.402807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.402872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.403123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.403188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.403385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.403450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.403684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.403750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.403976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.404040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.404286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.404352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 
00:26:29.476 [2024-07-26 14:20:37.404605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-07-26 14:20:37.404672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.476 qpair failed and we were unable to recover it. 00:26:29.476 [2024-07-26 14:20:37.404897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.404962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.405241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.405307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.405575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.405650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.405867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.405933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.406225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.406289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.406556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.406621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.406861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.406927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.407210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.407275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.407512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.407609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 
00:26:29.477 [2024-07-26 14:20:37.407849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.407914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.408194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.408259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.408519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.408603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.408906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.408972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.409216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.409281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.409523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.409606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.409896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.409962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.410220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.410285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.410548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.410615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.410826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.410891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 
00:26:29.477 [2024-07-26 14:20:37.411119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.411185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.411428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.411493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.411800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.411866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.412163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.412227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.412482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.412567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.412863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.412929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.413191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.413256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.413550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.413615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.413835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.413898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.414174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.414239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 
00:26:29.477 [2024-07-26 14:20:37.414497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.414589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.414847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.414914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.415169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.415236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.477 [2024-07-26 14:20:37.415483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.477 [2024-07-26 14:20:37.415581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.477 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.415869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.415934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.416214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.416279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.416521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.416606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.416851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.416916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.417193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.417258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.417477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.417560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 
00:26:29.478 [2024-07-26 14:20:37.417846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.417911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.418151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.418216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.418458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.418523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.418805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.418871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.419164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.419229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.419471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.419553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.419849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.419914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.420159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.420223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.420465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.420548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.420810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.420876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 
00:26:29.478 [2024-07-26 14:20:37.421129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.421194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.421485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.421567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.421858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.421923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.422201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.422265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.422559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.422625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.422826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.422891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.423133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.423200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.423420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.423486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.423819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.423884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.424126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.424190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 
00:26:29.478 [2024-07-26 14:20:37.424470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.424554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.424799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.424864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.425053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.425119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.425362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.425429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.425684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.425751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.426011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.426077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.426316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.426380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.426629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.426698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.426999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.427064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.427354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.427419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 
00:26:29.478 [2024-07-26 14:20:37.427640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.478 [2024-07-26 14:20:37.427717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.478 qpair failed and we were unable to recover it. 00:26:29.478 [2024-07-26 14:20:37.427916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.427982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.428231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.428298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.428581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.428648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.428939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.429004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.429257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.429322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.429557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.429623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.429850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.429914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.430191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.430256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.430498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.430581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 
00:26:29.479 [2024-07-26 14:20:37.430828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.430896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.431140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.431207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.431458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.431523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.431785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.431849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.432140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.432205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.432419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.432483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.432755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.432820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.433028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.433095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.433380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.433444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.433760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.433826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 
00:26:29.479 [2024-07-26 14:20:37.434098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.434163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.434409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.434472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.434737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.434803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.435059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.435123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.435333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.435399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.435681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.435749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.435993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.436055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.436348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.436414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.436637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.436699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.436925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.436990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 
00:26:29.479 [2024-07-26 14:20:37.437281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.437346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.437598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.437664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.437953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.438018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.438300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.438365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.438619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.438685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.438954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.439020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.439303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.439368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.439592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.439659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.479 [2024-07-26 14:20:37.439897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.479 [2024-07-26 14:20:37.439962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.479 qpair failed and we were unable to recover it. 00:26:29.480 [2024-07-26 14:20:37.440249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.480 [2024-07-26 14:20:37.440316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.480 qpair failed and we were unable to recover it. 
00:26:29.480 [2024-07-26 14:20:37.440595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.440671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.440961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.441026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.441275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.441340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.441619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.441687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.441928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.441994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.442268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.442334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.442573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.442640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.442877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.442941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.443182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.443249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.443499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.443581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.443842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.443906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.444143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.444210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.444494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.444578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.444824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.444891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.445147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.445213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.445412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.445479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.445741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.445808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.446038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.446103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.446343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.446407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.446689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.446756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.446964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.447031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.447284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.447349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.447601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.447668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.447880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.447945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.448184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.448249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.448550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.448617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.448864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.448929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.449220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.449285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.449578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.449644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.449846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.449914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.450140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.450206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.450445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.450512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.480 [2024-07-26 14:20:37.450781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.480 [2024-07-26 14:20:37.450848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.480 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.451088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.451155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.451399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.451466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.451726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.451793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.452040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.452106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.452349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.452416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.452654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.452722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.452935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.453002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.453247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.453329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.453544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.453611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.453906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.453971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.454248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.454313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.454590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.454657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.454876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.454941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.455197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.455261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.455506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.455586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.455801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.455866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.456156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.456220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.456494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.456590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.456889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.456953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.457199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.457267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.755 qpair failed and we were unable to recover it.
00:26:29.755 [2024-07-26 14:20:37.457503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.755 [2024-07-26 14:20:37.457587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.457850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.457916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.458178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.458242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.458555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.458621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.458855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.458920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.459133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.459199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.459483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.459566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.459825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.459889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.460110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.460174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.460464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.460565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.460855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.460920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.461199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.461264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.461456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.461522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.461804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.461870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.462173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.462238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.462544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.462610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.462893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.462958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.463237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.463303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.463590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.463657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.463898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.463963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.464209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.464274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.464510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.464606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.464865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.464930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.465170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.465236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.465524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.465607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.465896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.465962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.466221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.466285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.466577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.466671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.466942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.467009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.467306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.467370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.467632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.467699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.467980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.468045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.468321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.468386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.468636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.468705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.468991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.469057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.469342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.469408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.469687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.756 [2024-07-26 14:20:37.469755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.756 qpair failed and we were unable to recover it.
00:26:29.756 [2024-07-26 14:20:37.470039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.470103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.470349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.470414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.470675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.470743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.470989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.471054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.471334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.471401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.471675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.471742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.471984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.472050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.472289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.472354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.472646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.472713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.472951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.473016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.473267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.473332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.473621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.473688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.473907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.473972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.474233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.474298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.474589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.474657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.474926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.474990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.475219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.475284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.475579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.475646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.475895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.475963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.476213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.476278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.476555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.476622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.476906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.476971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.477229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.477294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.477564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.477631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.477849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.477915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.478136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.478200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.478429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.478494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.478770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.478836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.479048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.479114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.479363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.479431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.479698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.479775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.480078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.480145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.480431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.480496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.480803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.480869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.481169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.481234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.481510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.481594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.481863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.481928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.757 [2024-07-26 14:20:37.482168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.757 [2024-07-26 14:20:37.482235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.757 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.482476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.482563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.482827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.482891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.483181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.483246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.483523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.483606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.483872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.483936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.484183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.484248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.484456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.484522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.484819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.484884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.485173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.485238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.485519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.485608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.485893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.485958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.486202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.486269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.486524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.486608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.486891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.486955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.487168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.487232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.487520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.487606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.487876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.487940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.488178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.488242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.488561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.488626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.488896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.488961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.489243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.489307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.489608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.489675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.489974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.490038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.490296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.490360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.490643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.490708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.490986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.491050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.491282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.491346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.491554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.491626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.491904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.491968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.492182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.492247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.492559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.492624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.492915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.492979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.493230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.493304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.493578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.493643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.493930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.493994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.494285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.494348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.494592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.758 [2024-07-26 14:20:37.494657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.758 qpair failed and we were unable to recover it.
00:26:29.758 [2024-07-26 14:20:37.494904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.759 [2024-07-26 14:20:37.494967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.759 qpair failed and we were unable to recover it.
00:26:29.759 [2024-07-26 14:20:37.495252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.759 [2024-07-26 14:20:37.495316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.759 qpair failed and we were unable to recover it.
00:26:29.759 [2024-07-26 14:20:37.495560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.759 [2024-07-26 14:20:37.495625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.759 qpair failed and we were unable to recover it.
00:26:29.759 [2024-07-26 14:20:37.495876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.759 [2024-07-26 14:20:37.495940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.759 qpair failed and we were unable to recover it.
00:26:29.759 [2024-07-26 14:20:37.496223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.759 [2024-07-26 14:20:37.496286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.759 qpair failed and we were unable to recover it.
00:26:29.759 [2024-07-26 14:20:37.496568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.496633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.496921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.496985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.497260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.497323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.497559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.497625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.497876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.497940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.498200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.498264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.498490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.498596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.498749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.498785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.498894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.498928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.499107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.499141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 
00:26:29.759 [2024-07-26 14:20:37.499310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.499344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.499480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.499515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.499670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.499705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.499800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.499835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.499965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.499999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.500107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.500140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.500306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.500341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.500483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.500520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.500690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.500724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.500862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.500895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 
00:26:29.759 [2024-07-26 14:20:37.501035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.501069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.501230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.501265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.501422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.501456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.501588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.501621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.501784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.501818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.501979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.502012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.502134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.502166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.502321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.502355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.502512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.502556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.502669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.502700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 
00:26:29.759 [2024-07-26 14:20:37.502863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.502898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-07-26 14:20:37.503004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-07-26 14:20:37.503035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.503165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.503195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.503293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.503324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.503482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.503513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.503681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.503711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.503818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.503848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.504001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.504064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.504318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.504385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.504588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.504619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 
00:26:29.760 [2024-07-26 14:20:37.504745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.504775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.504872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.504902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.505082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.505146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.505413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.505478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.505689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.505720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.505863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.505931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.506213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.506279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.506549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.506591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.506683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.506715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.506873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.506903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 
00:26:29.760 [2024-07-26 14:20:37.506990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.507061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.507338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.507402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.507605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.507636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.507739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.507769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.507971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.508034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.508278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.508343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.508605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.508636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.508769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.508799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.509060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.509123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.509372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.509437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 
00:26:29.760 [2024-07-26 14:20:37.509690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.509722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.509822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.509852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.509950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-07-26 14:20:37.509979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-07-26 14:20:37.510237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.510300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.510498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.510538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.510665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.510695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.510895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.510959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.511250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.511314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.511559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.511612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.511764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.511794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 
00:26:29.761 [2024-07-26 14:20:37.512008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.512045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.512197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.512261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.512556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.512587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.512693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.512724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.512822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.512853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.513043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.513074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.513182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.513213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.513342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.513373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.513505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.513544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.513884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.513949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 
00:26:29.761 [2024-07-26 14:20:37.514199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.514263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.514508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.514591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.514834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.514899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.515129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.515193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.515488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.515573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.515867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.515932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.516188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.516251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.516498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.516596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.516887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.516950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.517196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.517262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 
00:26:29.761 [2024-07-26 14:20:37.517560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.517626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.517847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.517913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.518208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.518274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.518500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.518585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.518872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.518936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.519134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.519200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.519489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.519571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.519804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.519876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.520102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.520167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 00:26:29.761 [2024-07-26 14:20:37.520419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.520483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.761 qpair failed and we were unable to recover it. 
00:26:29.761 [2024-07-26 14:20:37.520762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.761 [2024-07-26 14:20:37.520826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.521082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.521146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.521356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.521422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.521692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.521757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.522030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.522094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.522341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.522405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.522663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.522728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.523021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.523085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.523292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.523357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.523624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.523688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 
00:26:29.762 [2024-07-26 14:20:37.523950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.524024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.524276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.524338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.524584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.524651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.524887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.524952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.525240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.525304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.525560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.525626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.525871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.525938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.526185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.526250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.526507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.526721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.526961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.527026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 
00:26:29.762 [2024-07-26 14:20:37.527241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.527307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.527523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.527607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.527861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.527924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.528168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.528232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.528510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.528598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.528819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.528883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.529101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.529167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.529451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.529516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.529808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.529873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.530150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.530214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 
00:26:29.762 [2024-07-26 14:20:37.530453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.530517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.530781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.530852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.531134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.531197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.531437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.531501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.531777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.531848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.532127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.532190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.532402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.532469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.532737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.762 [2024-07-26 14:20:37.532802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.762 qpair failed and we were unable to recover it. 00:26:29.762 [2024-07-26 14:20:37.533059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-07-26 14:20:37.533123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 00:26:29.763 [2024-07-26 14:20:37.533336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-07-26 14:20:37.533403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 
00:26:29.763 [2024-07-26 14:20:37.533626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-07-26 14:20:37.533693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 00:26:29.763 [2024-07-26 14:20:37.533905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-07-26 14:20:37.533970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 00:26:29.763 [2024-07-26 14:20:37.534166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-07-26 14:20:37.534233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 00:26:29.763 [2024-07-26 14:20:37.534494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-07-26 14:20:37.534577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 00:26:29.763 [2024-07-26 14:20:37.534877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-07-26 14:20:37.534942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 00:26:29.763 [2024-07-26 14:20:37.535174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-07-26 14:20:37.535238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 00:26:29.763 [2024-07-26 14:20:37.535515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-07-26 14:20:37.535594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 00:26:29.763 [2024-07-26 14:20:37.535889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-07-26 14:20:37.535953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 00:26:29.763 [2024-07-26 14:20:37.536167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-07-26 14:20:37.536233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 00:26:29.763 [2024-07-26 14:20:37.536479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-07-26 14:20:37.536559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 
00:26:29.763 [2024-07-26 14:20:37.536780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.763 [2024-07-26 14:20:37.536855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.763 qpair failed and we were unable to recover it.
00:26:29.768 [... the same three-message group (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats ~209 more times, timestamps advancing from 2024-07-26 14:20:37.537100 through 14:20:37.602109 ...]
00:26:29.769 [2024-07-26 14:20:37.602315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.602379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.602627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.602693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.602904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.602970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.603211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.603275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.603542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.603609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.603913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.603977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.604185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.604251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.604511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.604594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.604843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.604907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.605102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.605166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 
00:26:29.769 [2024-07-26 14:20:37.605386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.605450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.605696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.605761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.606058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.606123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.606358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.606422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.606721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.606787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.607085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.607149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.607363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.607426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.607665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.607742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.608003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.608067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.608285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.608351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 
00:26:29.769 [2024-07-26 14:20:37.608616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.608682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.608924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.608989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.609286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.609351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.609567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.609632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.609819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.609882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.610103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.610165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.610413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.610477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.610770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.610840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.611079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.611144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-07-26 14:20:37.611407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-07-26 14:20:37.611471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 
00:26:29.770 [2024-07-26 14:20:37.611758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.611833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.612128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.612193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.612466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.612577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.612836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.612900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.613147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.613211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.613504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.613593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.613828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.613893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.614122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.614185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.614385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.614450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.614709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.614777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 
00:26:29.770 [2024-07-26 14:20:37.615058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.615121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.615409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.615472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.615752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.615817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.616112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.616176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.616429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.616493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.616724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.616788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.617036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.617100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.617347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.617411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.617611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.617676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.617937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.618003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 
00:26:29.770 [2024-07-26 14:20:37.618296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.618360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.618646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.618712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.618980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.619045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.619255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.619319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.619583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.619650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.619924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.619988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.620227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.620291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.620578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.620653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.620866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.620930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.621183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.621248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 
00:26:29.770 [2024-07-26 14:20:37.621462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.621526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.621793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.621857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.622145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.622208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.622464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.622547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.622831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.622894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.623144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.623208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.623432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-07-26 14:20:37.623497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-07-26 14:20:37.623797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.623861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.624104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.624169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.624406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.624471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 
00:26:29.771 [2024-07-26 14:20:37.624730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.624796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.625060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.625125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.625358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.625422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.625696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.625763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.626046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.626110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.626360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.626425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.626725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.626791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.627050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.627114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.627326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.627390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.627604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.627670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 
00:26:29.771 [2024-07-26 14:20:37.627971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.628034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.628291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.628354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.628607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.628673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.628967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.629030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.629267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.629331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.629588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.629654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.629893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.629959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.630175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.630240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.630428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.630492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.630747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.630810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 
00:26:29.771 [2024-07-26 14:20:37.631070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.631141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.631442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.631505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.631781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.631845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.632063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.632129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.632345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.632411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.632745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.632811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.633033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.633098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.633345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.633421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.633709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.633775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.633986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.634052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 
00:26:29.771 [2024-07-26 14:20:37.634263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.634328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.634585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.634651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.634873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.634938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.635135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.635201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-07-26 14:20:37.635451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-07-26 14:20:37.635517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.635823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.635888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.636168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.636231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.636470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.636566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.636786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.636850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.637105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.637171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 
00:26:29.772 [2024-07-26 14:20:37.637438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.637503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.637766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.637841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.638097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.638161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.638402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.638468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.638783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.638847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.639114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.639179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.639429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.639503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.639803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.639873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.640116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.640181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.640408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.640472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 
00:26:29.772 [2024-07-26 14:20:37.640731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.640796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.641030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.641095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.641386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.641450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.641721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.641787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.642069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.642132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.642378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.642442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.642729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.642794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.643056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.643119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.643415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.643478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-07-26 14:20:37.643774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-07-26 14:20:37.643849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 
00:26:29.772 [2024-07-26 14:20:37.644083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.772 [2024-07-26 14:20:37.644147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.772 qpair failed and we were unable to recover it.
[... the preceding three-line error repeats verbatim, with only the timestamps advancing, roughly 200 more times between 14:20:37.644 and 14:20:37.710, every occurrence for tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 ...]
00:26:29.778 [2024-07-26 14:20:37.710160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.778 [2024-07-26 14:20:37.710187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:29.778 qpair failed and we were unable to recover it.
00:26:29.778 [2024-07-26 14:20:37.710329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.710354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.710463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.710490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.710635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.710677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.710774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.710801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.710901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.710927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.711008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.711033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.711121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.711150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.711235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.711260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.711345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.711372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.711466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.711492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 
00:26:29.778 [2024-07-26 14:20:37.711647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.711682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.711795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.711836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.711958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.711995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.712170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.712205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.712347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.712381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.712552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.712600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.712716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.712742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-07-26 14:20:37.712893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-07-26 14:20:37.712927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.713079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.713112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.713253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.713287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 
00:26:29.779 [2024-07-26 14:20:37.713407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.713434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.713532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.713559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.713676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.713703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.713814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.713862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.714000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.714034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.714203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.714268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.714437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.714462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.714577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.714604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.714679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.714705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.714790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.714816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 
00:26:29.779 [2024-07-26 14:20:37.714911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.714938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.715081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.715116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.715222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.715248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.715427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.715452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.715572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.715598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.715692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.715718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.715842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.715893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.716009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.716057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.716232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.716267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.716374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.716416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 
00:26:29.779 [2024-07-26 14:20:37.716506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.716551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.716667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.716693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.716823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.716867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.717005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.717056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.717179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.717220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.717367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.717403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.717519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.717580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.717691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.717717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.717861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.717894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.718022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.718058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 
00:26:29.779 [2024-07-26 14:20:37.718315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.718380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.718615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.718642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.718737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.718763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.718887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.718930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.719031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.779 [2024-07-26 14:20:37.719067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.779 qpair failed and we were unable to recover it. 00:26:29.779 [2024-07-26 14:20:37.719271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.719335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.719540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.719567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.719660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.719686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.719765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.719791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.719979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.720012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 
00:26:29.780 [2024-07-26 14:20:37.720280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.720344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.720554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.720600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.720696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.720722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.720809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.720844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.720931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.720980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.721123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.721195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.721334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.721404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.721656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.721683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.721764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.721790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.721962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.721988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 
00:26:29.780 [2024-07-26 14:20:37.722173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.722238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.722379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.722405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.722532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.722559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.722671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.722698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.722784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.722811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.722968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.723003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.723144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.723179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.723388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.723453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.723648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.723674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.723761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.723788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 
00:26:29.780 [2024-07-26 14:20:37.723893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.723927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.724047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.724092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.724340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.724406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.724594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.724621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.724708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.724733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.724824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.724851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.724982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.725017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.725187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.725249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.725483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.725535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.725647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.725672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 
00:26:29.780 [2024-07-26 14:20:37.725787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.725813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.725911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.725938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.780 [2024-07-26 14:20:37.726101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.780 [2024-07-26 14:20:37.726136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.780 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.726339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.726417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.726630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.726658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.726751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.726777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.726933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.726960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.727052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.727080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.727196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.727231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.727390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.727457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 
00:26:29.781 [2024-07-26 14:20:37.727649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.727678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.727768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.727794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.727962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.727988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.728104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.728129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.728236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.728266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.728385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.728419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.728619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.728649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.728759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.728785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.728901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.728928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.729130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.729161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 
00:26:29.781 [2024-07-26 14:20:37.729278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.729320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.729595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.729622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.729719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.729745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.729864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.729891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.731129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.731203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.731468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.731590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.731688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.731716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.731832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.731858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.732090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.732154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.732395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.732458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 
00:26:29.781 [2024-07-26 14:20:37.732659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.732685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.732791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.732817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.732912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.732938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.733058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.733085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.733260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.733318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.733610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.733645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.733791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.733862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.734154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.734218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.734480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.734564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 00:26:29.781 [2024-07-26 14:20:37.734709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.781 [2024-07-26 14:20:37.734744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:29.781 qpair failed and we were unable to recover it. 
00:26:29.781 [2024-07-26 14:20:37.734913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.781 [2024-07-26 14:20:37.734970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:29.782 qpair failed and we were unable to recover it.
00:26:29.782 [... the same connect() failed, errno = 111 / sock connection error / qpair failed triplet repeats for every attempt against tqpair=0x7f4338000b90 through 14:20:37.742193 ...]
00:26:29.782 [2024-07-26 14:20:37.742488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.782 [2024-07-26 14:20:37.742551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:29.782 qpair failed and we were unable to recover it.
00:26:29.783 [... the triplet repeats against tqpair=0x1030250; from 14:20:37.753486 a third qpair, tqpair=0x7f4340000b90, fails the same way, after which attempts return to 0x1030250 ...]
00:26:30.070 [2024-07-26 14:20:37.759398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.070 [2024-07-26 14:20:37.759425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.070 qpair failed and we were unable to recover it.
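For reference: errno = 111 on Linux is ECONNREFUSED, i.e. the TCP SYN to 10.0.0.2:4420 (the standard NVMe/TCP port) was answered with a reset because nothing was listening there; the target application has just been killed by the test, as the Killed line directly below shows. A minimal standalone sketch, not SPDK code, that reproduces the same errno; the address and port are taken from the log:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Address and port as seen in the log above. */
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* Reachable host but nothing bound to the port: errno is 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Run against a reachable host with nothing bound to the port, this prints connect() failed, errno = 111 (Connection refused), matching the records above.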
00:26:30.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 328878 Killed "${NVMF_APP[@]}" "$@"
00:26:30.070 14:20:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:30.070 14:20:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:30.070 [... errno = 111 triplets against tqpair=0x1030250 continue throughout (14:20:37.759500-14:20:37.760742), interleaved with the shell trace above ...]
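The host side keeps re-dialing until the target comes back, which is why the same two errors recur once per attempt until nvmf_tgt is restarted below. A generic bounded-retry sketch of that pattern, in plain POSIX rather than the SPDK qpair state machine; the attempt count and delay are illustration values only:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Keep re-dialing while the peer refuses; give up after max_tries. */
static int connect_with_retry(const char *ip, uint16_t port, int max_tries)
{
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    for (int i = 0; i < max_tries; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd;                  /* listener is back, connected */
        close(fd);
        if (errno != ECONNREFUSED)
            return -1;                  /* some other failure, stop */
        usleep(100 * 1000);             /* 100 ms between attempts */
    }
    return -1;                          /* "unable to recover it" */
}

int main(void)
{
    /* Address and port from the log; 30 tries is an arbitrary example. */
    int fd = connect_with_retry("10.0.0.2", 4420, 30);

    if (fd >= 0)
        close(fd);
    return fd >= 0 ? 0 : 1;
}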
00:26:30.070 14:20:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:30.070 14:20:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:30.070 14:20:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:30.070 [... errno = 111 triplets continue, one more against tqpair=0x1030250 and then against tqpair=0x7f4340000b90 (14:20:37.760935-14:20:37.762448) ...]
00:26:30.070 [2024-07-26 14:20:37.762561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.070 [2024-07-26 14:20:37.762589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.070 qpair failed and we were unable to recover it.
00:26:30.071 [... the triplet repeats against tqpair=0x7f4340000b90 through 14:20:37.764263 ...]
00:26:30.071 14:20:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=329352
00:26:30.071 14:20:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:30.071 14:20:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 329352
00:26:30.071 [... errno = 111 triplets against tqpair=0x7f4340000b90 continue (14:20:37.764399-14:20:37.765401), interleaved with the shell trace above ...]
00:26:30.071 14:20:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 329352 ']'
00:26:30.071 14:20:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:30.071 14:20:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:30.071 14:20:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:30.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:30.071 14:20:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:30.071 14:20:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:30.071 [... errno = 111 triplets against tqpair=0x7f4340000b90 continue (14:20:37.765488-14:20:37.766335), interleaved with the shell trace above ...]
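waitforlisten then blocks until the freshly started nvmf_tgt (pid 329352) is accepting connections on its RPC socket, /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A minimal sketch of that kind of readiness poll on a UNIX domain socket, illustrative only and not the actual shell helper:

#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Poll a UNIX domain socket until a server is accepting on it. */
static int wait_for_listen(const char *path, int max_tries)
{
    struct sockaddr_un addr;

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_tries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;               /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);         /* 100 ms, then try again */
    }
    return -1;                      /* never started listening */
}

int main(void)
{
    /* Socket path from the log; 100 mirrors max_retries=100 above. */
    return wait_for_listen("/var/tmp/spdk.sock", 100) ? 1 : 0;
}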
00:26:30.071 [2024-07-26 14:20:37.766451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.071 [2024-07-26 14:20:37.766477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.071 qpair failed and we were unable to recover it.
00:26:30.072 [... the triplet repeats against tqpair=0x7f4340000b90 through 14:20:37.771275, then the final attempts (from 14:20:37.771376) are against tqpair=0x1030250 ...]
00:26:30.072 [2024-07-26 14:20:37.771664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.072 [2024-07-26 14:20:37.771690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.072 qpair failed and we were unable to recover it.
00:26:30.072 [2024-07-26 14:20:37.771781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.072 [2024-07-26 14:20:37.771806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.072 qpair failed and we were unable to recover it. 00:26:30.072 [2024-07-26 14:20:37.771897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.072 [2024-07-26 14:20:37.771922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.072 qpair failed and we were unable to recover it. 00:26:30.072 [2024-07-26 14:20:37.772038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.072 [2024-07-26 14:20:37.772064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.072 qpair failed and we were unable to recover it. 00:26:30.072 [2024-07-26 14:20:37.772166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.072 [2024-07-26 14:20:37.772191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.072 qpair failed and we were unable to recover it. 00:26:30.072 [2024-07-26 14:20:37.772274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.072 [2024-07-26 14:20:37.772300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.072 qpair failed and we were unable to recover it. 00:26:30.072 [2024-07-26 14:20:37.772397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.072 [2024-07-26 14:20:37.772422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.072 qpair failed and we were unable to recover it. 00:26:30.072 [2024-07-26 14:20:37.772547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.072 [2024-07-26 14:20:37.772578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.072 qpair failed and we were unable to recover it. 00:26:30.072 [2024-07-26 14:20:37.772672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.072 [2024-07-26 14:20:37.772698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.072 qpair failed and we were unable to recover it. 00:26:30.072 [2024-07-26 14:20:37.772795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.072 [2024-07-26 14:20:37.772821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.072 qpair failed and we were unable to recover it. 00:26:30.072 [2024-07-26 14:20:37.772915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.072 [2024-07-26 14:20:37.772940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.072 qpair failed and we were unable to recover it. 
00:26:30.072 [2024-07-26 14:20:37.773038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.072 [2024-07-26 14:20:37.773064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.072 qpair failed and we were unable to recover it. 00:26:30.072 [2024-07-26 14:20:37.773148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.072 [2024-07-26 14:20:37.773172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.072 qpair failed and we were unable to recover it. 00:26:30.072 [2024-07-26 14:20:37.773270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.072 [2024-07-26 14:20:37.773295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.072 qpair failed and we were unable to recover it. 00:26:30.072 [2024-07-26 14:20:37.773385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.072 [2024-07-26 14:20:37.773409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.773498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.773523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.773613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.773637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.773725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.773749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.773857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.773881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.773959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.773984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.774069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.774095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 
00:26:30.073 [2024-07-26 14:20:37.774202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.774237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.774364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.774392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.774535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.774562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.774642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.774667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.774756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.774782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.774877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.774904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.775001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.775028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.775114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.775139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.775246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.775272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.775366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.775390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 
00:26:30.073 [2024-07-26 14:20:37.775510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.775544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.775627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.775652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.775743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.775768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.775854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.775878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.775974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.775998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.776073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.776098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.776186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.776211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.776327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.776352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.776434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.776458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.776551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.776578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 
00:26:30.073 [2024-07-26 14:20:37.776667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.776691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.776774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.776799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.776883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.776907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.777024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.777053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.777146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.777173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.777287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.777320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.777420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.777447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.777543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.777570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.777661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.777686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.777774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.777800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 
00:26:30.073 [2024-07-26 14:20:37.777900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.073 [2024-07-26 14:20:37.777924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.073 qpair failed and we were unable to recover it. 00:26:30.073 [2024-07-26 14:20:37.778022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.778047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.778139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.778163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.778260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.778285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.778362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.778387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.778474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.778499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.778636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.778661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.778751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.778776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.778889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.778914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.779026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.779051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 
00:26:30.074 [2024-07-26 14:20:37.779174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.779201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.779315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.779341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.779423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.779449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.779564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.779589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.779684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.779710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.779793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.779819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.779896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.779921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.780042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.780068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.780195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.780220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.780328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.780353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 
00:26:30.074 [2024-07-26 14:20:37.780433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.780459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.780565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.780591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.780696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.780721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.780809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.780835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.780950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.780976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.781075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.781101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.781197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.781221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.781349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.781374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.781455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.781489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.781597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.781623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 
00:26:30.074 [2024-07-26 14:20:37.781712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.781736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.074 qpair failed and we were unable to recover it. 00:26:30.074 [2024-07-26 14:20:37.781829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.074 [2024-07-26 14:20:37.781855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.781980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.782016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.782129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.782154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.782241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.782265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.782361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.782387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.782491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.782542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.782661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.782693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.782784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.782810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.782903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.782928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 
00:26:30.075 [2024-07-26 14:20:37.783014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.783050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.783151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.783177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.783272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.783308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.783393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.783418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.783507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.783538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.783640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.783665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.783753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.783780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.783870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.783895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.783984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.784010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.784108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.784133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 
00:26:30.075 [2024-07-26 14:20:37.784222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.784248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.784349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.784375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.784454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.784492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.784626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.784652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.784738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.784763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.784888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.784914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.784997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.785023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.785104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.785129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.785212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.785237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.785337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.785364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 
00:26:30.075 [2024-07-26 14:20:37.785455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.785481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.785580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.785609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.785708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.785734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.785865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.785893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.786585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.786669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.786811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.786837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.786942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.786967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.787064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.787091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.075 [2024-07-26 14:20:37.787185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.075 [2024-07-26 14:20:37.787211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.075 qpair failed and we were unable to recover it. 00:26:30.076 [2024-07-26 14:20:37.787327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.076 [2024-07-26 14:20:37.787354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.076 qpair failed and we were unable to recover it. 
00:26:30.076 [2024-07-26 14:20:37.787442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.076 [2024-07-26 14:20:37.787467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.076 qpair failed and we were unable to recover it. 00:26:30.076 [2024-07-26 14:20:37.790545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.076 [2024-07-26 14:20:37.790585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.076 qpair failed and we were unable to recover it. 00:26:30.076 [2024-07-26 14:20:37.790694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.076 [2024-07-26 14:20:37.790721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.076 qpair failed and we were unable to recover it. 00:26:30.076 [2024-07-26 14:20:37.790822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.076 [2024-07-26 14:20:37.790848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.076 qpair failed and we were unable to recover it. 00:26:30.076 [2024-07-26 14:20:37.790938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.076 [2024-07-26 14:20:37.790963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.076 qpair failed and we were unable to recover it. 00:26:30.076 [2024-07-26 14:20:37.791055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.076 [2024-07-26 14:20:37.791082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.076 qpair failed and we were unable to recover it. 00:26:30.076 [2024-07-26 14:20:37.791193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.076 [2024-07-26 14:20:37.791220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.076 qpair failed and we were unable to recover it. 00:26:30.076 [2024-07-26 14:20:37.791320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.076 [2024-07-26 14:20:37.791350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.076 qpair failed and we were unable to recover it. 00:26:30.076 [2024-07-26 14:20:37.791442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.076 [2024-07-26 14:20:37.791468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.076 qpair failed and we were unable to recover it. 00:26:30.076 [2024-07-26 14:20:37.791561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.076 [2024-07-26 14:20:37.791587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.076 qpair failed and we were unable to recover it. 
00:26:30.076 [2024-07-26 14:20:37.791685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.076 [2024-07-26 14:20:37.791711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.076 qpair failed and we were unable to recover it.
00:26:30.076 [the three records above repeat near-verbatim for every reconnect attempt from 14:20:37.791 through 14:20:37.822; each connect() to 10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED) on the same tqpair 0x7f4338000b90, and every attempt ends with "qpair failed and we were unable to recover it."]
00:26:30.081 [2024-07-26 14:20:37.825661] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization...
00:26:30.081 [2024-07-26 14:20:37.825725] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:30.085 [2024-07-26 14:20:37.847323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.085 [2024-07-26 14:20:37.847365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.085 qpair failed and we were unable to recover it.
00:26:30.085 [2024-07-26 14:20:37.847497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.085 [2024-07-26 14:20:37.847552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.085 qpair failed and we were unable to recover it.
00:26:30.086 [2024-07-26 14:20:37.849609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.086 [2024-07-26 14:20:37.849649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.086 qpair failed and we were unable to recover it.
00:26:30.086 [2024-07-26 14:20:37.850319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.850344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.850426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.850451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.850595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.850635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.850732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.850760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.850920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.850947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.851034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.851060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.851175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.851201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.851312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.851338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.851452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.851478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.851631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.851670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 
00:26:30.086 [2024-07-26 14:20:37.851761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.851789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.851952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.851979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.852096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.852123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.852216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.852241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.852378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.852404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.852524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.852559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.852648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.852677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.852763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.852789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.852875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.852902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.853015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.853041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 
00:26:30.086 [2024-07-26 14:20:37.853162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.853191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.853308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.853335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.853413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.853439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.853563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.853592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.853676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.853708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.853805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.853836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.853924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.853953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.854063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.854091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.854182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.854208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.854314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.854339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 
00:26:30.086 [2024-07-26 14:20:37.854447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.854473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.854606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.086 [2024-07-26 14:20:37.854633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.086 qpair failed and we were unable to recover it. 00:26:30.086 [2024-07-26 14:20:37.854742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.854768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.854882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.854909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.855022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.855048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.855132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.855158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.855236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.855261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.855344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.855371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.855491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.855533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.855632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.855658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 
00:26:30.087 [2024-07-26 14:20:37.855746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.855772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.855918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.855944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.856065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.856092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.856208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.856234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.856369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.856395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.856480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.856506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.856639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.856666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.856779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.856805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.856933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.856959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.857099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.857125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 
00:26:30.087 [2024-07-26 14:20:37.857264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.857289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.857447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.857487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.857646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.857673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.857765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.857792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.857877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.857902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.857981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.858007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.858104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.858130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.858247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.858274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.858367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.858393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.858503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.858544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 
00:26:30.087 [2024-07-26 14:20:37.858663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.858690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.858810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.858835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.858959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.858985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.859078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.859104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.859242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.859273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.859386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.859413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.859538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.859565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.859650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.859677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.859795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.859829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 00:26:30.087 [2024-07-26 14:20:37.859950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.087 [2024-07-26 14:20:37.859977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.087 qpair failed and we were unable to recover it. 
00:26:30.088 [2024-07-26 14:20:37.860088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.860114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.860254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.860280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.860374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.860404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.860494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.860538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.860631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.860658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.860737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.860763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.860883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.860910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.861019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.861045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.861133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.861160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.861300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.861326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 
00:26:30.088 [2024-07-26 14:20:37.861443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.861469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.861554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.861580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.861690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.861716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.861807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.861844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.862042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.862069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.862189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.862215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.862342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.862381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.862505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.862553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.862647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.862674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-07-26 14:20:37.862757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-07-26 14:20:37.862784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 
00:26:30.088 [2024-07-26 14:20:37.862903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.088 [2024-07-26 14:20:37.862932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.088 qpair failed and we were unable to recover it.
[... identical failures continue through 14:20:37.863623 ...]
00:26:30.088 EAL: No free 2048 kB hugepages reported on node 1
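The single EAL line above is worth flagging amid the connection noise: it is DPDK's environment abstraction layer reporting that NUMA node 1 has no free 2048 kB hugepages, and the SPDK target allocates its buffers from hugepage-backed memory. A minimal standalone sketch of the same check, assuming the standard Linux sysfs layout; the node number simply mirrors the log line, and the path and exit convention are illustrative, not SPDK code:

/* Minimal sketch: read the free 2048 kB hugepage counter for one NUMA node.
 * The sysfs layout is standard Linux; "node1" mirrors the EAL line above
 * and is otherwise an arbitrary illustrative choice. Not SPDK code. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    long free_pages = -1;

    if (f == NULL || fscanf(f, "%ld", &free_pages) != 1) {
        perror(path);              /* counter missing or unreadable */
        if (f) fclose(f);
        return 1;
    }
    fclose(f);
    printf("node1 free 2048 kB hugepages: %ld\n", free_pages);
    return free_pages > 0 ? 0 : 1; /* zero free pages matches the EAL warning */
}

On a healthy test node this prints a non-zero counter; a zero here is consistent with the EAL warning above.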
[... the same connect() failure resumes at 14:20:37.863727 and repeats through 14:20:37.876008, still against addr=10.0.0.2, port=4420 ...]
00:26:30.091 [2024-07-26 14:20:37.876123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.091 [2024-07-26 14:20:37.876151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.091 qpair failed and we were unable to recover it.
00:26:30.091 [2024-07-26 14:20:37.876267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.876293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.876376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.876402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.876485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.876511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.876649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.876675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.876793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.876829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.876949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.876975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.877093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.877119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.877236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.877264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.877377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.877407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.877495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.877538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 
00:26:30.091 [2024-07-26 14:20:37.877639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.877666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.877780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.877805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.877901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.877926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.878011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.878037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.878142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.878168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.878272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.878312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.878432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.878460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.878613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.878640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.878751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.878777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.878889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.878915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 
00:26:30.091 [2024-07-26 14:20:37.878999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.879025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.879138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.879164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.879266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.879306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.879401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.879440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-07-26 14:20:37.879547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-07-26 14:20:37.879575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.879662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.879688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.879827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.879853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.879925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.879951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.880042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.880069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.880155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.880183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 
00:26:30.092 [2024-07-26 14:20:37.880299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.880326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.880435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.880461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.880589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.880615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.880698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.880725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.880823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.880857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.880951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.880976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.881081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.881108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.881220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.881248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.881393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.881419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.881563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.881591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 
00:26:30.092 [2024-07-26 14:20:37.881670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.881696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.881807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.881833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.881967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.881993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.882096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.882122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.882232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.882257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.882386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.882425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.882551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.882594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.882686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.882712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.882821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.882851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.882964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.882990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 
00:26:30.092 [2024-07-26 14:20:37.883078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.883108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.883194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.883220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.883311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.883338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.883451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.883477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.883607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.883634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.883725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.883752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.883893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.883919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.883993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.884019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.884129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.884155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.884267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.884293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 
00:26:30.092 [2024-07-26 14:20:37.884406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.884432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.884539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.092 [2024-07-26 14:20:37.884579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.092 qpair failed and we were unable to recover it. 00:26:30.092 [2024-07-26 14:20:37.884698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.884726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.884876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.884902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.884980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.885007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.885113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.885138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.885221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.885250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.885395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.885421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.885512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.885551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.885637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.885664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 
00:26:30.093 [2024-07-26 14:20:37.885753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.885779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.885914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.885940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.886024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.886051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.886163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.886190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.886325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.886351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.886469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.886497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.886657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.886685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.886774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.886800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.886903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.886928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.887010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.887037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 
00:26:30.093 [2024-07-26 14:20:37.887125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.887151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.887227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.887253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.887329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.887355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.887507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.887570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.887669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.887696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.887788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.887826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.887934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.887961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.888072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.888098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.888207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.888238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.888384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.888410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 
00:26:30.093 [2024-07-26 14:20:37.888494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.888540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.888689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.888716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.888834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.888861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.888976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.889004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.093 [2024-07-26 14:20:37.889119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.093 [2024-07-26 14:20:37.889145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.093 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.889231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.889258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.889339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.889366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.889454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.889482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.889600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.889627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.889744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.889770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 
00:26:30.094 [2024-07-26 14:20:37.889890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.889917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.890000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.890026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.890140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.890166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.890242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.890268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.890355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.890381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.890462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.890489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.890609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.890638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.890750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.890777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.890856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.890882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.890988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.891014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 
00:26:30.094 [2024-07-26 14:20:37.891093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.891120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.891207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.891234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.891347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.891374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.891485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.891511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.891650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.891676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.891788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.891829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.891943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.891970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.892078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.892105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.892239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.892264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.892356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.892382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 
00:26:30.094 [2024-07-26 14:20:37.892516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.892554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.892667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.892693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.892837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.892863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.892949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.892976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-07-26 14:20:37.893097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-07-26 14:20:37.893125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.893214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.893243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.893344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.893384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.893502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.893538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.893627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.893653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.893751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.893777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 
00:26:30.095 [2024-07-26 14:20:37.893861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.893887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.893972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.893999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.894112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.894139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.894250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.894276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.894370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.894396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.894503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.894546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.894633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.894659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.894745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.894771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.894912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.894938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.895049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.895075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 
00:26:30.095 [2024-07-26 14:20:37.895190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.895216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.895301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.895330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.895460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.895499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.895598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.895629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.895708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.895734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.895874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.895901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.896010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.896037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.896135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.896162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-07-26 14:20:37.896272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-07-26 14:20:37.896299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 
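Note on the errors above: errno = 111 is ECONNREFUSED on Linux, i.e. nothing was accepting on 10.0.0.2:4420 (4420 is the NVMe/TCP default port) at the moment posix_sock_create() issued its connect(). The minimal standalone sketch below reproduces the same failure with plain POSIX sockets; it is illustrative only and is not SPDK code, and the file name is invented.

/* econnrefused_demo.c - connect() to a port with no listener; on Linux
 * this fails with errno = 111 (ECONNREFUSED), matching the log above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                    /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* address from the log */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* Prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Compiled with cc econnrefused_demo.c and run against a host with no listener on the port, this prints the same errno that the test log records for every qpair attempt.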
00:26:30.095 [2024-07-26 14:20:37.896359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:30.095 [2024-07-26 14:20:37.896383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.095 [2024-07-26 14:20:37.896408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.095 qpair failed and we were unable to recover it.
00:26:30.095 [2024-07-26 14:20:37.896488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.095 [2024-07-26 14:20:37.896515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.095 qpair failed and we were unable to recover it.
00:26:30.095 [2024-07-26 14:20:37.896611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.095 [2024-07-26 14:20:37.896637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.095 qpair failed and we were unable to recover it.
00:26:30.095 [2024-07-26 14:20:37.896724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.095 [2024-07-26 14:20:37.896752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.095 qpair failed and we were unable to recover it.
00:26:30.095 [2024-07-26 14:20:37.896841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.095 [2024-07-26 14:20:37.896867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.095 qpair failed and we were unable to recover it.
00:26:30.095 [2024-07-26 14:20:37.896977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.095 [2024-07-26 14:20:37.897004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.095 qpair failed and we were unable to recover it.
00:26:30.095 [2024-07-26 14:20:37.897137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.095 [2024-07-26 14:20:37.897177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.095 qpair failed and we were unable to recover it.
00:26:30.095 [2024-07-26 14:20:37.897297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.095 [2024-07-26 14:20:37.897326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.095 qpair failed and we were unable to recover it.
00:26:30.095 [2024-07-26 14:20:37.897416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.095 [2024-07-26 14:20:37.897443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.095 qpair failed and we were unable to recover it.
00:26:30.095 [2024-07-26 14:20:37.897542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.095 [2024-07-26 14:20:37.897569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.095 qpair failed and we were unable to recover it.
00:26:30.095 [2024-07-26 14:20:37.897662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.095 [2024-07-26 14:20:37.897688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.095 qpair failed and we were unable to recover it.
00:26:30.095 [2024-07-26 14:20:37.897771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.897798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.897906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.897932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.898125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.898151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.898278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.898306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.898397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.898424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.898561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.898588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.898728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.898754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.898869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.898896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.898972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.899004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.899094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.899120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.899236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.899266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.899398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.899437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.899580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.899608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.899748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.899774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.899857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.899883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.899998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.900025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.900132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.900158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.900264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.900303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.900418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.900446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.900567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.900596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.900673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.900699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.900781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.900808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.900903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.900931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.901043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.901070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.901207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.901247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.901368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.901396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.901477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.901504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.901612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.901639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.901742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.901768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.901878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.901904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.901984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.902011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.902129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.902155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.902268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.902294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.902376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.902404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.902494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.902520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.096 [2024-07-26 14:20:37.902609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.096 [2024-07-26 14:20:37.902641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.096 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.902761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.902787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.902889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.902915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.902992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.903019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.903107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.903135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.903225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.903251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.903356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.903382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.903474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.903500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.903600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.903627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.903716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.903743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.903875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.903901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.904015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.904041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.904156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.904182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.904322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.904348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.904480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.904506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.904627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.904666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.904787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.904816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.904903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.904930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.905040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.905067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.905175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.905202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.905294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.905321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.905405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.905433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.905544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.905571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.905659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.905687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.905807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.905835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.905971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.905998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.906079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.906105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.906222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.906253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.906365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.906392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.906478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.906504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.906603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.906630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.906716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.097 [2024-07-26 14:20:37.906743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [2024-07-26 14:20:37.906849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.906876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.906978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.907004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.907112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.907151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.907242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.907270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.907362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.907390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.907503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.907536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.907621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.907648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.907734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.907760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.907873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.907900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.908005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.908031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.908132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.908161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.908243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.908270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.908406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.908433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.908568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.908596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.908710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.908738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.908825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.908852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.908935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.908963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.909054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.909081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.909172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.909198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.909286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.909312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.909428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.909456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.909552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.909579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.909702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.909728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.909817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.909843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.909932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.909958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.910039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.910067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.910164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.910192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.910306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.910333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.910448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.910475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.910562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.910589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.098 qpair failed and we were unable to recover it.
00:26:30.098 [2024-07-26 14:20:37.910696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.098 [2024-07-26 14:20:37.910723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.910915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.910942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.911060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.911087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.911170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.911197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.911308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.911335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.911439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.911470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.911562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.911590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.911711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.911738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.911817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.911842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.911928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.911956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.912041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.912068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.912183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.912209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.912346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.912373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.912486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.912514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.912613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.912640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.912830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.912856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.912966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.912992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.913076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.913103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.913182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.913208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.913326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.913353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.913458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.913485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.913577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.913604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.913691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.913718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.913834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.913861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.913944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.913971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.914048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.914076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.914218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.914244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.914354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.914381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.914492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.914540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.914633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.099 [2024-07-26 14:20:37.914659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.099 qpair failed and we were unable to recover it.
00:26:30.099 [2024-07-26 14:20:37.914736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.914762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.914864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.914891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.914991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.915031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.915150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.915179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.915299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.915326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.915435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.915461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.915585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.915613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.915727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.915755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.915835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.915861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.915976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.916002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.916143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.916169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.916285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.916311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.916407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.916435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.916553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.916581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.916688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.916715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.916798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.916830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.916945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.916972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.917110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.917137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.917230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.917257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.917393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.917434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.917560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.917589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.917678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.917705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.917782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.917808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.917915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.917942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.918056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.918084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.918169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.918195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.918296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.918336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.918455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.918484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.918584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.918612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.918706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.918734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.918852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.100 [2024-07-26 14:20:37.918879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.100 qpair failed and we were unable to recover it.
00:26:30.100 [2024-07-26 14:20:37.919018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.100 [2024-07-26 14:20:37.919045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.100 qpair failed and we were unable to recover it. 00:26:30.100 [2024-07-26 14:20:37.919157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.100 [2024-07-26 14:20:37.919184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.100 qpair failed and we were unable to recover it. 00:26:30.100 [2024-07-26 14:20:37.919289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.100 [2024-07-26 14:20:37.919316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.100 qpair failed and we were unable to recover it. 00:26:30.100 [2024-07-26 14:20:37.919451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.100 [2024-07-26 14:20:37.919478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.100 qpair failed and we were unable to recover it. 00:26:30.100 [2024-07-26 14:20:37.919596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.100 [2024-07-26 14:20:37.919624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.100 qpair failed and we were unable to recover it. 00:26:30.100 [2024-07-26 14:20:37.919715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.100 [2024-07-26 14:20:37.919742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.100 qpair failed and we were unable to recover it. 00:26:30.100 [2024-07-26 14:20:37.919827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.100 [2024-07-26 14:20:37.919854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.100 qpair failed and we were unable to recover it. 00:26:30.100 [2024-07-26 14:20:37.919944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.919970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.920071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.920098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.920239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.920266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 
00:26:30.101 [2024-07-26 14:20:37.920354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.920382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.920471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.920502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.920604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.920631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.920714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.920740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.920826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.920852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.920994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.921021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.921097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.921124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.921237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.921264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.921382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.921408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.921516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.921551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 
00:26:30.101 [2024-07-26 14:20:37.921648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.921674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.921756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.921783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.921875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.921902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.922008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.922035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.922116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.922143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.922251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.922278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.922363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.922390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.922502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.922537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.922647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.922674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.922783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.922809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 
00:26:30.101 [2024-07-26 14:20:37.922891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.922917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.922989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.923015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.923156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.923183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.923291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.923317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.923429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.923456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.923574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.923603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-07-26 14:20:37.923686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-07-26 14:20:37.923712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.923823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.923849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.923966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.923996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.924081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.924107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 
00:26:30.102 [2024-07-26 14:20:37.924189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.924217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.924295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.924322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.924415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.924441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.924547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.924574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.924688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.924714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.924800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.924826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.924912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.924938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.925022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.925048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.925169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.925210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.925330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.925358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 
00:26:30.102 [2024-07-26 14:20:37.925466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.925494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.925612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.925639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.925731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.925759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.925849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.925876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.925964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.925991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.926102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.926129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.926273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.926300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.926420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.926447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.926541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.926569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.926680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.926707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 
00:26:30.102 [2024-07-26 14:20:37.926822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.926850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.926970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.926998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.927113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.927140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.927252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.927279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.927386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.927413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.927536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.927569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.927663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.927691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.927804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.927831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.927944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.927971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.928091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.928118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 
00:26:30.102 [2024-07-26 14:20:37.928226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.928253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.928367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.928394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.928503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-07-26 14:20:37.928536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-07-26 14:20:37.928628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.928656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.928767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.928793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.928908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.928936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.929051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.929078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.929192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.929219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.929309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.929336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.929428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.929456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 
00:26:30.103 [2024-07-26 14:20:37.929549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.929577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.929694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.929723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.929841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.929868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.929979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.930005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.930112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.930140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.930279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.930305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.930411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.930438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.930532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.930560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.930642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.930669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.930777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.930804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 
00:26:30.103 [2024-07-26 14:20:37.930920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.930949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.931095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.931122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.931214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.931242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.931333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.931360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.931498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.931549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.931701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.931729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.931871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.931898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.932033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.932059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.932169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.932196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.932317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.932346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 
00:26:30.103 [2024-07-26 14:20:37.932438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.932466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.932573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.932601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.932716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.932743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-07-26 14:20:37.932854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-07-26 14:20:37.932881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.932968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.932994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.933085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.933115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.933226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.933252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.933330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.933357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.933467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.933505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.933600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.933629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 
00:26:30.104 [2024-07-26 14:20:37.933713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.933741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.933854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.933882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.934023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.934049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.934171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.934198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.934279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.934307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.934425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.934452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.934557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.934585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.934670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.934699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.934792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.934821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.934918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.934945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 
00:26:30.104 [2024-07-26 14:20:37.935059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.935087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.935229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.935257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.935351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.935378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.935454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.935480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.935603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.935630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.935740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.935767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.935878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.935905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.935985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.936013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.936131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.936158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.936276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.936303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 
00:26:30.104 [2024-07-26 14:20:37.936387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.936414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.936535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.936563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.936657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.936687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.936775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.936803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.936917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.936944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.937057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.937086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.937196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.937223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.937337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.937363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.937445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.937472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.104 [2024-07-26 14:20:37.937576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.937604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 
00:26:30.104 [2024-07-26 14:20:37.937696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.104 [2024-07-26 14:20:37.937723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.104 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.937811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.937838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.937952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.937980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.938094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.938121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.938212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.938241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.938355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.938383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.938463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.938490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.938647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.938675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.938757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.938784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.938873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.938899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 
00:26:30.105 [2024-07-26 14:20:37.938980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.939007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.939117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.939144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.939281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.939308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.939468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.939494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.939636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.939663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.939753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.939779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.939916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.939943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.940026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.940052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.940139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.940165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 00:26:30.105 [2024-07-26 14:20:37.940275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.105 [2024-07-26 14:20:37.940320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.105 qpair failed and we were unable to recover it. 
[... the same "connect() failed, errno = 111" (ECONNREFUSED) / "qpair failed and we were unable to recover it." record pair repeats without interruption through [2024-07-26 14:20:37.967699], cycling across tqpair=0x1030250, 0x7f4330000b90, 0x7f4338000b90, and 0x7f4340000b90, all with addr=10.0.0.2, port=4420; the duplicate records are omitted here ...]
00:26:30.110 [2024-07-26 14:20:37.967786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.110 [2024-07-26 14:20:37.967814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.110 qpair failed and we were unable to recover it. 00:26:30.110 [2024-07-26 14:20:37.967929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.110 [2024-07-26 14:20:37.967956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.110 qpair failed and we were unable to recover it. 00:26:30.110 [2024-07-26 14:20:37.968041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.110 [2024-07-26 14:20:37.968068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.110 qpair failed and we were unable to recover it. 00:26:30.110 [2024-07-26 14:20:37.968157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.110 [2024-07-26 14:20:37.968184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.110 qpair failed and we were unable to recover it. 00:26:30.110 [2024-07-26 14:20:37.968323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.110 [2024-07-26 14:20:37.968350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.110 qpair failed and we were unable to recover it. 00:26:30.110 [2024-07-26 14:20:37.968454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.110 [2024-07-26 14:20:37.968481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.110 qpair failed and we were unable to recover it. 00:26:30.110 [2024-07-26 14:20:37.968601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.110 [2024-07-26 14:20:37.968630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.110 qpair failed and we were unable to recover it. 00:26:30.110 [2024-07-26 14:20:37.968715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.110 [2024-07-26 14:20:37.968743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.110 qpair failed and we were unable to recover it. 00:26:30.110 [2024-07-26 14:20:37.968869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.110 [2024-07-26 14:20:37.968896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.110 qpair failed and we were unable to recover it. 00:26:30.110 [2024-07-26 14:20:37.969009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.969036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 
00:26:30.111 [2024-07-26 14:20:37.969147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.969174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.969299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.969339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.969481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.969509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.969612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.969644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.969737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.969765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.969910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.969937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.970023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.970049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.970134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.970161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.970242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.970268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.970365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.970405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 
00:26:30.111 [2024-07-26 14:20:37.970522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.970556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.970647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.970676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.970788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.970828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.970941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.970969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.971085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.971112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.971231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.971259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.971351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.971381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.971479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.971506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.971597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.971627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.971720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.971747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 
00:26:30.111 [2024-07-26 14:20:37.971885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.971912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.972001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.972029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.972122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.972150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.972273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.972313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.972429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.972458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.972550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.972578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.972670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.972697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.972822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.972849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.972963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.972989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.973103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.973129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 
00:26:30.111 [2024-07-26 14:20:37.973218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.973250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.973356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.973396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.973490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.973534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.973628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.973655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.973738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.973765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.973894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.973921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.974028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.974055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.974166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.974192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.974300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.974327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 00:26:30.111 [2024-07-26 14:20:37.974422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.111 [2024-07-26 14:20:37.974463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.111 qpair failed and we were unable to recover it. 
00:26:30.111 [2024-07-26 14:20:37.974581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.974610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.974702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.974729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.974874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.974901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.975010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.975037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.975133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.975160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.975252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.975281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.975397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.975424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.975510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.975549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.975638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.975665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.975751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.975777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 
00:26:30.112 [2024-07-26 14:20:37.975858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.975884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.975973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.976000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.976136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.976162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.976291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.976331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.976474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.976502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.976641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.976671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.976762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.976790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.976918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.976950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.977039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.977067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.977158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.977185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 
00:26:30.112 [2024-07-26 14:20:37.977266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.977293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.977405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.977433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.977508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.977550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.977641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.977669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.977765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.977793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.977950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.977977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.978093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.978121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.978241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.978268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.978357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.978385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.978500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.978546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 
00:26:30.112 [2024-07-26 14:20:37.978635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.978662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.978782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.978819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.978907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.978934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.979031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.979058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.979205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-07-26 14:20:37.979231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-07-26 14:20:37.979346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.979372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.979488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.979525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.979696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.979723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.979805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.979837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.979945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.979972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 
00:26:30.113 [2024-07-26 14:20:37.980065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.980093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.980177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.980205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.980331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.980371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.980541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.980581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.980679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.980708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.980792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.980827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.980941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.980968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.981049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.981076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.981216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.981242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.981379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.981406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 
00:26:30.113 [2024-07-26 14:20:37.981540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.981581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.981702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.981731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.981825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.981853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.981936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.981964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.982077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.982104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.982184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.982211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.982323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.982350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.982464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.982497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.982627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.982658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.982753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.982782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 
00:26:30.113 [2024-07-26 14:20:37.982878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.982906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.983045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.983071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.983165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.983192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.983281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.983307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.983396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.983424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.983500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.983547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.983657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.983684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.983806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.983836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.983949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.983976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.984054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.984082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 
00:26:30.113 [2024-07-26 14:20:37.984190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.984217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.984304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.984331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.984413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.984441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.984554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.984583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.984732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-07-26 14:20:37.984762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-07-26 14:20:37.984864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-07-26 14:20:37.984891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-07-26 14:20:37.984980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-07-26 14:20:37.985007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-07-26 14:20:37.985092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-07-26 14:20:37.985119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-07-26 14:20:37.985234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-07-26 14:20:37.985261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-07-26 14:20:37.985366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-07-26 14:20:37.985393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 
00:26:30.114 [2024-07-26 14:20:37.985501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.114 [2024-07-26 14:20:37.985544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.114 qpair failed and we were unable to recover it.
00:26:30.114 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it.) repeats continuously from 14:20:37.985 through 14:20:38.012 for tqpair handles 0x7f4340000b90, 0x7f4338000b90, 0x7f4330000b90, and 0x1030250, all targeting addr=10.0.0.2, port=4420 ...]
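For readers triaging the run: errno 111 on Linux is ECONNREFUSED, meaning the peer answered the TCP SYN with an RST because nothing was listening on 10.0.0.2 port 4420 (the NVMe/TCP default) at the moment the initiator called connect(). The following is a minimal standalone sketch of that same failure mode, not part of the test or of SPDK itself, assuming an address that is reachable but has no listener bound on the port:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* 10.0.0.2:4420 is the target taken from the log above; any
     * reachable host with no listener on the port behaves the same. */
    struct sockaddr_in sa = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0)
        return 1;
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);               /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
        /* Prints "connect() failed, errno = 111 (Connection refused)"
         * when the peer sends RST, matching the posix.c error above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}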
00:26:30.119 [2024-07-26 14:20:38.010097] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:30.119 [2024-07-26 14:20:38.010129] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:30.119 [2024-07-26 14:20:38.010151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:30.119 [2024-07-26 14:20:38.010163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:30.119 [2024-07-26 14:20:38.010174] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:30.119 [2024-07-26 14:20:38.010227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:26:30.119 [2024-07-26 14:20:38.010254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:26:30.119 [2024-07-26 14:20:38.010280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:26:30.119 [2024-07-26 14:20:38.010283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:26:30.119 [2024-07-26 14:20:38.010125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.010157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.010253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.010280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.010399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.010426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.010517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.010555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.010639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.010664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.010770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.010796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.010904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.010930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
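Per the app_setup_trace notices above, a snapshot of these failed connect attempts can be captured while the application is up with 'spdk_trace -s nvmf -i 0' (or plain 'spdk_trace' when this is the only SPDK application running), or by copying /dev/shm/nvmf_trace.0 for offline analysis; the Tracepoint Group Mask 0xFFFF noted earlier means all tracepoint groups were enabled for this run.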
00:26:30.119 [2024-07-26 14:20:38.011013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.011039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.011144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.011171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.011259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.011288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.011377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.011404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.011489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.011518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.011617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.011645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.011728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.011755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.011869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.011896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.011991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.012018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.012105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.012132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.012243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.012270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.012380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.012409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.012493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.012522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.012624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.012651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.012736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.012762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.012871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.012897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.013008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.013037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.119 qpair failed and we were unable to recover it.
00:26:30.119 [2024-07-26 14:20:38.013129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.119 [2024-07-26 14:20:38.013157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.013245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.013271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.013357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.013383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.013490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.013517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.013608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.013634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.013717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.013743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.013855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.013882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.013986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.014013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.014133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.014160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.014245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.014273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.014371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.014399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.014510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.014544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.014637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.014664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.014748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.014775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.014864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.014895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.015006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.015033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.015126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.015152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.015286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.015313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.015393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.015428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.015549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.015577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.015666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.015694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.015798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.015830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.015928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.015955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.016060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.016089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.016217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.016264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.016371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.016399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.016514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.016549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.016635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.016659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.016748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.016773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.016861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.016887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.016971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.016998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.017094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.017124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.017222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.120 [2024-07-26 14:20:38.017250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.120 qpair failed and we were unable to recover it.
00:26:30.120 [2024-07-26 14:20:38.017339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.017367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.017450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.017475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.017590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.017619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.017705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.017732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.017834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.017861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.017949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.017976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.018060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.018087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.018172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.018201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.018292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.018320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.018429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.018468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.018584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.018611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.018728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.018755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.018856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.018883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.018997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.019023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.019102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.019129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.019207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.019234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.019313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.019339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.019454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.019480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.019576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.019603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.019704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.019733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.019823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.019870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.019978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.020010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.020109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.020137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.020225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.020252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.020332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.020359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.020446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.020472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.020578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.020607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.020698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.020724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.020811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.020848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.121 qpair failed and we were unable to recover it.
00:26:30.121 [2024-07-26 14:20:38.020927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.121 [2024-07-26 14:20:38.020953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.021060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.021085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.021162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.021188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.021277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.021305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.021399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.021425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.021517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.021551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.021638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.021665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.021754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.021780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.021865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.021891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.021972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.021999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.022095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.022121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.022206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.022232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.022315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.022342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.022432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.022460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.022561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.022589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.022672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.022699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.022793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.022827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.022931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.022960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.023051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.023080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.023159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.023190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.023274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.023299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.023382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.023408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.023517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.023551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.023637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.023663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.023771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.023797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.023900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.023926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.024011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.024039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.024125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.024152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.024243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.024284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.024380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.024409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.024512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.024548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.024639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.024667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.024756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.024783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.024886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.024913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.122 qpair failed and we were unable to recover it.
00:26:30.122 [2024-07-26 14:20:38.025024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.122 [2024-07-26 14:20:38.025051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.025136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.025163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.025242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.025268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.025350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.025377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.025465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.025493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.025592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.025622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.025714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.025743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.025824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.025854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.025935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.025961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.026047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.026073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.026167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.026195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.026282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.026311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.026397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.026424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.026510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.026547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.026664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.026691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.026768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.026794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.026915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.026943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.027042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.027067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.027199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.027226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.027338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.027364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.027495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.027547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.027643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.027672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.027794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.027821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.027903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.027930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.028017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.028045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.028140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.028173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.028265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.028292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.028378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.028404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.028489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.028534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.028645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.028671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.028760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.028786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.028927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.028953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.029031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.029056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.029155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.123 [2024-07-26 14:20:38.029185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.123 qpair failed and we were unable to recover it.
00:26:30.123 [2024-07-26 14:20:38.029271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.124 [2024-07-26 14:20:38.029298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.124 qpair failed and we were unable to recover it.
00:26:30.124 [2024-07-26 14:20:38.029379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.124 [2024-07-26 14:20:38.029406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.124 qpair failed and we were unable to recover it.
00:26:30.124 [2024-07-26 14:20:38.029487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.124 [2024-07-26 14:20:38.029513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.124 qpair failed and we were unable to recover it.
00:26:30.124 [2024-07-26 14:20:38.029610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.124 [2024-07-26 14:20:38.029638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.124 qpair failed and we were unable to recover it.
00:26:30.124 [2024-07-26 14:20:38.029731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.124 [2024-07-26 14:20:38.029760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.124 qpair failed and we were unable to recover it.
00:26:30.124 [2024-07-26 14:20:38.029863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.124 [2024-07-26 14:20:38.029891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.124 qpair failed and we were unable to recover it.
00:26:30.124 [2024-07-26 14:20:38.029982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.124 [2024-07-26 14:20:38.030009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.124 qpair failed and we were unable to recover it.
00:26:30.124 [2024-07-26 14:20:38.030094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.124 [2024-07-26 14:20:38.030120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.124 qpair failed and we were unable to recover it.
00:26:30.124 [2024-07-26 14:20:38.030207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.124 [2024-07-26 14:20:38.030233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.124 qpair failed and we were unable to recover it.
00:26:30.124 [2024-07-26 14:20:38.030324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.124 [2024-07-26 14:20:38.030351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.124 qpair failed and we were unable to recover it.
00:26:30.124 [2024-07-26 14:20:38.030436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.124 [2024-07-26 14:20:38.030462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.124 qpair failed and we were unable to recover it.
00:26:30.124 [2024-07-26 14:20:38.030555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.124 [2024-07-26 14:20:38.030582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.124 qpair failed and we were unable to recover it.
00:26:30.124 [2024-07-26 14:20:38.030672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.124 [2024-07-26 14:20:38.030698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.124 qpair failed and we were unable to recover it.
00:26:30.124 [2024-07-26 14:20:38.030779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.030806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.030881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.030906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.031017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.031042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.031120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.031146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.031223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.031249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.031334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.031369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.031452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.031480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.031585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.031613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.031726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.031754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.031856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.031894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 
00:26:30.124 [2024-07-26 14:20:38.032014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.032040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.032131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.032158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.032253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.032281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.032373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.032399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.032487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.032521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.032628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.032654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.032738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.032765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.032859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.032892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.033008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.033035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.033127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.033154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 
00:26:30.124 [2024-07-26 14:20:38.033231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.033268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.033378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.033404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.033488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.033513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.033603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.033630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.124 [2024-07-26 14:20:38.033716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.124 [2024-07-26 14:20:38.033740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.124 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.033819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.033843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.033922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.033946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.034029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.034056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.034157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.034195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.034290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.034316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 
00:26:30.125 [2024-07-26 14:20:38.034405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.034430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.034568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.034594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.034699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.034727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.034816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.034849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.034957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.034983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.035070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.035095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.035183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.035208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.035328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.035352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.035433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.035459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.035566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.035593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 
00:26:30.125 [2024-07-26 14:20:38.035682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.035706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.035818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.035844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.035929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.035954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.036068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.036095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.036186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.036213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.036311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.036342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.036455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.036482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.036580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.036608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.036696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.036722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.036810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.036839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 
00:26:30.125 [2024-07-26 14:20:38.036954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.036979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.037071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.037100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.037196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.037236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.037359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.037387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.037501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.037541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.037635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.037662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.037742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.037769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.037863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.037889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.037974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.038001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.038089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.038116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 
00:26:30.125 [2024-07-26 14:20:38.038208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.038234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.038314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.125 [2024-07-26 14:20:38.038340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.125 qpair failed and we were unable to recover it. 00:26:30.125 [2024-07-26 14:20:38.038452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.038480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.038574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.038602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.038689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.038715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.038822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.038848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.038928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.038954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.039037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.039063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.039155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.039181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.039272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.039299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 
00:26:30.126 [2024-07-26 14:20:38.039424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.039464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.039567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.039598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.039687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.039719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.039807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.039837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.039953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.039979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.040065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.040092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.040189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.040216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.040306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.040336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.040423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.040451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.040548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.040576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 
00:26:30.126 [2024-07-26 14:20:38.040681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.040707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.040868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.040894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.040984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.041012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.041100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.041129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.041216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.041243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.041327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.041354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.041446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.041473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.041569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.041597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.041679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.041705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.041797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.041827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 
00:26:30.126 [2024-07-26 14:20:38.041931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.041957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.042052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.042078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.042166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.126 [2024-07-26 14:20:38.042196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-07-26 14:20:38.042282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.042309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.042405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.042444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.042551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.042578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.042679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.042719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.042842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.042870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.042955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.042981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.043105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.043132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 
00:26:30.127 [2024-07-26 14:20:38.043215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.043241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.043359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.043388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.043472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.043499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.043615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.043644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.043732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.043759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.043888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.043915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.044000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.044026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.044116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.044142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.044235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.044264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.044367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.044407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 
00:26:30.127 [2024-07-26 14:20:38.044495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.044540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.044623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.044651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.044736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.044768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.044859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.044886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.044974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.045001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.045118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.045145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.045260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.045287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.045375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.045403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.045485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.045512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.045629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.045669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 
00:26:30.127 [2024-07-26 14:20:38.045758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.045786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.045872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.045898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.045992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.046019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.046106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.046132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.046244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.046284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.046375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.046404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.046496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.046544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.046642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.046668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.046751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.046778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-07-26 14:20:38.046879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.127 [2024-07-26 14:20:38.046906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.127 qpair failed and we were unable to recover it. 
00:26:30.127 [2024-07-26 14:20:38.046993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.047021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.047126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.047155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.047248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.047276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.047362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.047388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.047486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.047519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.047623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.047652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.047769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.047796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.047884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.047911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.048024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.048051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.048140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.048168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 
00:26:30.128 [2024-07-26 14:20:38.048258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.048284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.048373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.048399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.048478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.048505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.048600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.048629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.048710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.048737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.048819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.048846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.048956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.048982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.049064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.049092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.049175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.049203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-07-26 14:20:38.049292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.128 [2024-07-26 14:20:38.049319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.128 qpair failed and we were unable to recover it. 
00:26:30.128 [2024-07-26 14:20:38.049405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.128 [2024-07-26 14:20:38.049432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.128 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." — repeats continuously from 14:20:38.049405 through 14:20:38.074619, cycling over tqpair=0x1030250, 0x7f4330000b90, 0x7f4338000b90, and 0x7f4340000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:26:30.419 [2024-07-26 14:20:38.074747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-07-26 14:20:38.074775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-07-26 14:20:38.074895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-07-26 14:20:38.074922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-07-26 14:20:38.075048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-07-26 14:20:38.075075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-07-26 14:20:38.075159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-07-26 14:20:38.075187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-07-26 14:20:38.075264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-07-26 14:20:38.075290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-07-26 14:20:38.075382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-07-26 14:20:38.075410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-07-26 14:20:38.075500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-07-26 14:20:38.075535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-07-26 14:20:38.075633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-07-26 14:20:38.075674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-07-26 14:20:38.075792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-07-26 14:20:38.075820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-07-26 14:20:38.075901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-07-26 14:20:38.075928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 
00:26:30.419 [2024-07-26 14:20:38.076021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-07-26 14:20:38.076048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-07-26 14:20:38.076133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-07-26 14:20:38.076161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-07-26 14:20:38.076271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-07-26 14:20:38.076297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-07-26 14:20:38.076381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-07-26 14:20:38.076408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-07-26 14:20:38.076503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.076538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.076621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.076648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.076747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.076774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.076856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.076883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.076973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.077000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.077089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.077115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 
00:26:30.420 [2024-07-26 14:20:38.077236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.077263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.077348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.077375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.077455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.077483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.077595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.077634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.077734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.077763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.077851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.077878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.077961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.077987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.078067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.078093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.078179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.078206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.078296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.078335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 
00:26:30.420 [2024-07-26 14:20:38.078440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.078480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.078591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.078621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.078711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.078738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.078829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.078856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.078956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.078985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.079072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.079099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.079186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.079212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.079307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.079333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.079422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.079450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.079551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.079582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 
00:26:30.420 [2024-07-26 14:20:38.079678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.079706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.079797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.079823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.079907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.079933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.080021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.080050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.080166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.080194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.080287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.080314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.080400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.080426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.080509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.080547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.080631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.080657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-07-26 14:20:38.080737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.080763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 
00:26:30.420 [2024-07-26 14:20:38.080841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-07-26 14:20:38.080868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.080969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.081005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.081090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.081116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.081195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.081220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.081309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.081335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.081411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.081437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.081549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.081576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.081662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.081688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.081780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.081807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.081898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.081924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 
00:26:30.421 [2024-07-26 14:20:38.082006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.082034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.082153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.082182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.082277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.082303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.082383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.082410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.082525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.082559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.082648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.082674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.082762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.082789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.082875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.082901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.082985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.083011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.083091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.083117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 
00:26:30.421 [2024-07-26 14:20:38.083198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.083224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.083335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.083374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.083502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.083537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.083651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.083678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.083764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.083795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.083884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.083910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.083996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.084022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.084106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.421 [2024-07-26 14:20:38.084132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.421 qpair failed and we were unable to recover it. 00:26:30.421 [2024-07-26 14:20:38.084213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.084239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.084315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.084341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 
00:26:30.422 [2024-07-26 14:20:38.084434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.084460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.084541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.084568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.084656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.084683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.084765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.084792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.084875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.084901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.084980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.085006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.085090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.085116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.085240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.085280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.085402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.085442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.085532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.085561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 
00:26:30.422 [2024-07-26 14:20:38.085653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.085680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.085767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.085794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.085877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.085902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.085979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.086005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.086087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.086112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.086199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.086228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.086308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.086335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.086419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.086444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.086521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.086555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.086646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.086672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 
00:26:30.422 [2024-07-26 14:20:38.086783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.086809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.086888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.086919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.087004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.087030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.087136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.087163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.087257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.087285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.087404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.087433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.087520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.087553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.087633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.087659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.087749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.087775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.087854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.087880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 
00:26:30.422 [2024-07-26 14:20:38.087959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.087986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.088059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.088085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.088185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.088224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.088316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.088343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.088470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.088499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.422 [2024-07-26 14:20:38.088598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.422 [2024-07-26 14:20:38.088626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.422 qpair failed and we were unable to recover it. 00:26:30.423 [2024-07-26 14:20:38.088708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-07-26 14:20:38.088735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.423 qpair failed and we were unable to recover it. 00:26:30.423 [2024-07-26 14:20:38.088823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-07-26 14:20:38.088850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.423 qpair failed and we were unable to recover it. 00:26:30.423 [2024-07-26 14:20:38.088940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-07-26 14:20:38.088966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.423 qpair failed and we were unable to recover it. 00:26:30.423 [2024-07-26 14:20:38.089056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-07-26 14:20:38.089085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.423 qpair failed and we were unable to recover it. 
00:26:30.423 [2024-07-26 14:20:38.089177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-07-26 14:20:38.089205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.423 qpair failed and we were unable to recover it. 00:26:30.423 [2024-07-26 14:20:38.089294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-07-26 14:20:38.089322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.423 qpair failed and we were unable to recover it. 00:26:30.423 [2024-07-26 14:20:38.089403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-07-26 14:20:38.089429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.423 qpair failed and we were unable to recover it. 00:26:30.423 [2024-07-26 14:20:38.089513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-07-26 14:20:38.089548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.423 qpair failed and we were unable to recover it. 00:26:30.423 [2024-07-26 14:20:38.089633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-07-26 14:20:38.089659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.423 qpair failed and we were unable to recover it. 00:26:30.423 [2024-07-26 14:20:38.089744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-07-26 14:20:38.089770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.423 qpair failed and we were unable to recover it. 00:26:30.423 [2024-07-26 14:20:38.089851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-07-26 14:20:38.089876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.423 qpair failed and we were unable to recover it. 00:26:30.423 [2024-07-26 14:20:38.089990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-07-26 14:20:38.090017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.423 qpair failed and we were unable to recover it. 00:26:30.423 [2024-07-26 14:20:38.090100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-07-26 14:20:38.090127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.423 qpair failed and we were unable to recover it. 00:26:30.423 [2024-07-26 14:20:38.090212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-07-26 14:20:38.090240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.423 qpair failed and we were unable to recover it. 
00:26:30.423 [2024-07-26 14:20:38.090354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.423 [2024-07-26 14:20:38.090381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.423 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; qpair failed and we were unable to recover it.) repeats roughly 200 more times between 14:20:38.090521 and 14:20:38.114957 (elapsed 00:26:30.423 through 00:26:30.429), cycling among tqpair=0x7f4340000b90, 0x7f4338000b90, 0x7f4330000b90, and 0x1030250, always against addr=10.0.0.2, port=4420 ...]
00:26:30.429 [2024-07-26 14:20:38.115046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.429 [2024-07-26 14:20:38.115075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.429 qpair failed and we were unable to recover it.
00:26:30.429 [2024-07-26 14:20:38.115169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.115196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.115283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.115311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.115419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.115446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.115565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.115593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.115678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.115704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.115781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.115808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.115930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.115956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.116047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.116073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.116188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.116217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.116359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.116399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 
00:26:30.429 [2024-07-26 14:20:38.116489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.116518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.116615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.116642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.116727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.116754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.116833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.116860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.116948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.116975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.117056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.117083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.117175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.117204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.117290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.117317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.117402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.117429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.117510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.117547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 
00:26:30.429 [2024-07-26 14:20:38.117642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.117668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.117746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.117772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.117856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.117882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.117968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.117996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.118081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.118108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.118198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.118237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.118432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.118458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.118556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.118584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.118662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.118688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.118764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.118790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 
00:26:30.429 [2024-07-26 14:20:38.118878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.118904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.118989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.119015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.119098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.119126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.119208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.429 [2024-07-26 14:20:38.119237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.429 qpair failed and we were unable to recover it. 00:26:30.429 [2024-07-26 14:20:38.119332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.119359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.119435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.119461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.119593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.119620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.119702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.119729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.119814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.119841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.119924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.119951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 
00:26:30.430 [2024-07-26 14:20:38.120058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.120084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.120166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.120193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.120280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.120308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.120392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.120421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.120503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.120535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.120626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.120654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.120762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.120789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.120862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.120888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.120970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.120996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.121075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.121102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 
00:26:30.430 [2024-07-26 14:20:38.121198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.121226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.121326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.121353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.121438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.121464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.121560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.121587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.121662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.121688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.121777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.121804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.121916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.121947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.122031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.122057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.122138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.122164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.122242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.122268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 
00:26:30.430 [2024-07-26 14:20:38.122366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.122406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.122493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.122519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.122616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.122643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.430 qpair failed and we were unable to recover it. 00:26:30.430 [2024-07-26 14:20:38.122728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.430 [2024-07-26 14:20:38.122754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.122841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.122868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.122946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.122972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.123050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.123076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.123152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.123177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.123278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.123318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.123406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.123434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 
00:26:30.431 [2024-07-26 14:20:38.123553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.123584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.123683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.123710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.123791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.123818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.123903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.123929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.124019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.124045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.124160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.124186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.124270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.124297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.124380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.124409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.124499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.124536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.124660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.124689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 
00:26:30.431 [2024-07-26 14:20:38.124781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.124808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.124921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.124948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.125038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.125065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.125153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.125181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.125262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.125289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.125378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.125404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.125490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.125517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.125633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.125660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.125742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.125769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.125855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.125882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 
00:26:30.431 [2024-07-26 14:20:38.125970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.125996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.126080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.126106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.126188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.126214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.126295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.126322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.126417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.126457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.126591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.126621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.126718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.126749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.126831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.126858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.126945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.126972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-07-26 14:20:38.127085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-07-26 14:20:38.127112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 
00:26:30.431 [2024-07-26 14:20:38.127195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.127221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.127295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.127322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.127403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.127430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.127507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.127539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.127625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.127651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.127738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.127765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.127850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.127876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.127954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.127980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.128057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.128083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.128168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.128194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 
00:26:30.432 [2024-07-26 14:20:38.128282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.128308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.128396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.128422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.128503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.128536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.128630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.128656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.128737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.128762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.128843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.128870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.128981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.129007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.129098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.129127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.129212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.129241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.129317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.129344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 
00:26:30.432 [2024-07-26 14:20:38.129429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.129456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.129538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.129564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.129644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.129670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.129753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.129784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.129870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.129897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.129997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.130024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.130133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.130160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.130265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.130292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.130485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.130512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-07-26 14:20:38.130605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-07-26 14:20:38.130632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 
00:26:30.432 [2024-07-26 14:20:38.130713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.432 [2024-07-26 14:20:38.130740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.432 qpair failed and we were unable to recover it.
00:26:30.432 [2024-07-26 14:20:38.130928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.432 [2024-07-26 14:20:38.130956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.432 qpair failed and we were unable to recover it.
00:26:30.432 [2024-07-26 14:20:38.131386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.432 [2024-07-26 14:20:38.131415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.432 qpair failed and we were unable to recover it.
00:26:30.433 [2024-07-26 14:20:38.133248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.433 [2024-07-26 14:20:38.133286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420
00:26:30.433 qpair failed and we were unable to recover it.
00:26:30.433 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=... with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats without interruption for tqpairs 0x1030250, 0x7f4330000b90, 0x7f4338000b90, and 0x7f4340000b90 through 2024-07-26 14:20:38.155150 ...]
00:26:30.438 [2024-07-26 14:20:38.155231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.155258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.155369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.155395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.155479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.155506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.155716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.155744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.155826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.155853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.155938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.155964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.156047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.156073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.156162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.156189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.156285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.156324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.156424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.156451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 
00:26:30.438 [2024-07-26 14:20:38.156569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.156598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.156675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.156706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.156793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.156820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.156914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.156941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.157023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.157051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.157146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.157172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.157283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.157308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.157391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-07-26 14:20:38.157418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-07-26 14:20:38.157494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.157523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.157620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.157647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 
00:26:30.439 [2024-07-26 14:20:38.157735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.157761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.157841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.157867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.157974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.158000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.158115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.158142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.158226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.158253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.158352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.158391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.158483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.158512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.158606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.158632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.158717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.158743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.158853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.158880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 
00:26:30.439 [2024-07-26 14:20:38.159076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.159104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.159194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.159221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.159305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.159330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.159408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.159434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.159509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.159540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.159623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.159649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.159742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.159770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.159857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.159885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.159975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.160002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.160092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.160118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 
00:26:30.439 [2024-07-26 14:20:38.160206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.160234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.160319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.160346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.160435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.160461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.160553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.160580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.160659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.160686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.160763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.160789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.160898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.160924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.161018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.161044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.161136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.161163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.161241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.161268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 
00:26:30.439 [2024-07-26 14:20:38.161359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.161386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.161473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.161505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.161631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.161671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.161771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.161811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.161899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.161926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.162030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.439 [2024-07-26 14:20:38.162057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.439 qpair failed and we were unable to recover it. 00:26:30.439 [2024-07-26 14:20:38.162153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.162180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.162259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.162288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.162372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.162400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.162489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.162517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 
00:26:30.440 [2024-07-26 14:20:38.162607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.162635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.162718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.162744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.162820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.162846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.162965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.162991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.163083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.163109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.163197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.163223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.163307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.163333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.163443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.163469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.163544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.163571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.163657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.163683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 
00:26:30.440 [2024-07-26 14:20:38.163767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.163794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.163908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.163936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.164046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.164072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.164154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.164181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.164272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.164301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.164385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.164413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.164496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.164523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.164628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.164656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.164742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.164774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.164861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.164889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 
00:26:30.440 [2024-07-26 14:20:38.164978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.165004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.165078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.165104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.165239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.165265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.165352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.165380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.165464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.165489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.165581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.165609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.165705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.165732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.440 [2024-07-26 14:20:38.165838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.440 [2024-07-26 14:20:38.165865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.440 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.165965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.165991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.166068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.166096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 
00:26:30.441 [2024-07-26 14:20:38.166188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.166215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.166296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.166322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.166436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.166464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.166578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.166605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.166697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.166724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.166811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.166837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.166915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.166941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.167027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.167054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.167167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.167194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.167299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.167338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 
00:26:30.441 [2024-07-26 14:20:38.167423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.167450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.167544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.167571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.167657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.167683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.167774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.167800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.167883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.167910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.167996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.168022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.168102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.168128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.168228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.168268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.168355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.168382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.168478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.168518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 
00:26:30.441 [2024-07-26 14:20:38.168621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.168651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.168764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.168791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.168876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.168903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.168988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.169014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.169096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.169123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.169204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.169230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.169341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.169370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.169460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.169489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.169584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.169616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.169708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.169735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 
00:26:30.441 [2024-07-26 14:20:38.169871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.169897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.170011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.170037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.170121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.170149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.170237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.170263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.170357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.441 [2024-07-26 14:20:38.170397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.441 qpair failed and we were unable to recover it. 00:26:30.441 [2024-07-26 14:20:38.170488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.170517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.170723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.170750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.170875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.170901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.170980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.171006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.171098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.171124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 
00:26:30.442 [2024-07-26 14:20:38.171211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.171238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.171316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.171341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.171490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.171536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.171624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.171652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.171745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.171773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.171854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.171881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.171962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.171988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.172101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.172127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.172239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.172266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.172358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.172387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 
00:26:30.442 [2024-07-26 14:20:38.172516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.172552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.172669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.172696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.172777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.172805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.172888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.172914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.173021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.173048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.173164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.173192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.173289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.173328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.173422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.173450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.173563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.173590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.173678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.173705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 
00:26:30.442 [2024-07-26 14:20:38.173796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.173822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.173906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.173935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.174028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.174055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.174148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.174175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.174288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.174315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.174403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.174429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.174515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.174547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.174630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.174657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.174745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.174777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.174861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.174888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 
00:26:30.442 [2024-07-26 14:20:38.174967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.174993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.175082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.175110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.442 [2024-07-26 14:20:38.175212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.442 [2024-07-26 14:20:38.175250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.442 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.175333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.175360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.175433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.175460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.175547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.175574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.175661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.175687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.175769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.175795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.175883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.175909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.175993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.176021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 
00:26:30.443 [2024-07-26 14:20:38.176100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.176127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.176210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.176239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.176328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.176355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.176436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.176463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.176559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.176586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.176674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.176703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.176815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.176842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.176926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.176953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.177041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.177067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.177160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.177186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 
00:26:30.443 [2024-07-26 14:20:38.177272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.177300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.177388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.177415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.177524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.177557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.177665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.177692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.177776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.177802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.177884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.177915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.177996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.178023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.178106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.178134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.178233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.178272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.178361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.178389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 
00:26:30.443 [2024-07-26 14:20:38.178504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.178536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.178620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.178646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.178743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.178769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.178847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.178872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.178961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.178986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.179065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.179091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.179173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.179199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.179278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.443 [2024-07-26 14:20:38.179303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.443 qpair failed and we were unable to recover it. 00:26:30.443 [2024-07-26 14:20:38.179425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.179455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.179546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.179574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 
00:26:30.444 [2024-07-26 14:20:38.179667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.179694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.179779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.179806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.179893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.179919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.180007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.180035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.180147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.180174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.180254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.180280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.180365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.180392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.180476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.180503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.180596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.180623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.180710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.180736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 
00:26:30.444 [2024-07-26 14:20:38.180823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.180848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.180926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.180951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.181120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.181147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.181236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.181262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.181360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.181400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.181514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.181555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.181642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.181668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.181747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.181773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.181869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.181895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.181971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.181998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 
00:26:30.444 [2024-07-26 14:20:38.182078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.182105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.182193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.182218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.182289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.182315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.182392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.182418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.182494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.182519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.182602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.182627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.182722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.182747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.182829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.182855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.182933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.182959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.183038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.183064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 
00:26:30.444 [2024-07-26 14:20:38.183139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.183165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.444 [2024-07-26 14:20:38.183242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.444 [2024-07-26 14:20:38.183268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.444 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.183346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.183371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.183479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.183507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.183604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.183631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.183708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.183734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.183821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.183847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.183934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.183960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.184042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.184067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.184166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.184192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 
00:26:30.445 [2024-07-26 14:20:38.184278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.184304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.184386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.184412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.184491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.184517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.184607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.184633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.184725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.184751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.184831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.184857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.184938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.184964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.185045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.185072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.185147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.185173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.185271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.185311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 
00:26:30.445 [2024-07-26 14:20:38.185403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.185432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.185520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.185561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.185642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.185668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.185751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.185777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.185860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.185886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.185999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.186025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.186113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.186139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.186218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.186243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.186361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.186389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.186472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.186498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 
00:26:30.445 [2024-07-26 14:20:38.186591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.186620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.186705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.186732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.186814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.186840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.186924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.186950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.187028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.187054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.187136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.445 [2024-07-26 14:20:38.187162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.445 qpair failed and we were unable to recover it. 00:26:30.445 [2024-07-26 14:20:38.187253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.187281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.187358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.187384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.187496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.187523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.187612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.187638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 
00:26:30.446 [2024-07-26 14:20:38.187720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.187747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.187831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.187861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.187979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.188007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.188088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.188115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.188194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.188220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.188308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.188335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.188430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.188458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.188551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.188577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.188658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.188685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.188764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.188795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 
00:26:30.446 [2024-07-26 14:20:38.188916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.188942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.189023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.189050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.189135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.189161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.189242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.189268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.189365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.189405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.189499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.189535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.189627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.189653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.189735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.189761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.189855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.189881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.189991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.190017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 
00:26:30.446 [2024-07-26 14:20:38.190101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.190128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.190233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.190259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.190337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.190363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.190466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.190493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.190602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.190642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.190732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.190761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.190846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.190873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.190951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.190978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.191066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.191093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 00:26:30.446 [2024-07-26 14:20:38.191170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.446 [2024-07-26 14:20:38.191196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.446 qpair failed and we were unable to recover it. 
00:26:30.446 [2024-07-26 14:20:38.191278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.446 [2024-07-26 14:20:38.191306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.446 qpair failed and we were unable to recover it.
00:26:30.446 [2024-07-26 14:20:38.191390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.446 [2024-07-26 14:20:38.191417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420
00:26:30.446 qpair failed and we were unable to recover it.
00:26:30.446 [2024-07-26 14:20:38.191505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.446 [2024-07-26 14:20:38.191538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420
00:26:30.446 qpair failed and we were unable to recover it.
00:26:30.446 [2024-07-26 14:20:38.191623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.446 [2024-07-26 14:20:38.191651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.446 qpair failed and we were unable to recover it.
00:26:30.447 [2024-07-26 14:20:38.191739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.447 [2024-07-26 14:20:38.191766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420
00:26:30.447 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it.) repeats through 14:20:38.216582 for tqpair=0x1030250, 0x7f4330000b90, 0x7f4338000b90 and 0x7f4340000b90, all against addr=10.0.0.2, port=4420 ...]
00:26:30.452 [2024-07-26 14:20:38.216667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.452 [2024-07-26 14:20:38.216695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.452 qpair failed and we were unable to recover it. 00:26:30.452 [2024-07-26 14:20:38.216780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.452 [2024-07-26 14:20:38.216807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.452 qpair failed and we were unable to recover it. 00:26:30.452 [2024-07-26 14:20:38.216892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.452 [2024-07-26 14:20:38.216919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.452 qpair failed and we were unable to recover it. 00:26:30.452 [2024-07-26 14:20:38.217007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.452 [2024-07-26 14:20:38.217035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.452 qpair failed and we were unable to recover it. 00:26:30.452 [2024-07-26 14:20:38.217124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.452 [2024-07-26 14:20:38.217151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.452 qpair failed and we were unable to recover it. 00:26:30.452 [2024-07-26 14:20:38.217231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.452 [2024-07-26 14:20:38.217259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.452 qpair failed and we were unable to recover it. 00:26:30.452 [2024-07-26 14:20:38.217349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.452 [2024-07-26 14:20:38.217377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.452 qpair failed and we were unable to recover it. 00:26:30.452 [2024-07-26 14:20:38.217467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.452 [2024-07-26 14:20:38.217493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.452 qpair failed and we were unable to recover it. 00:26:30.452 [2024-07-26 14:20:38.217577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.452 [2024-07-26 14:20:38.217603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.452 qpair failed and we were unable to recover it. 00:26:30.452 [2024-07-26 14:20:38.217678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.452 [2024-07-26 14:20:38.217704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.452 qpair failed and we were unable to recover it. 
00:26:30.452 [2024-07-26 14:20:38.217787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.452 [2024-07-26 14:20:38.217814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.452 qpair failed and we were unable to recover it. 00:26:30.452 [2024-07-26 14:20:38.217913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.452 [2024-07-26 14:20:38.217939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.452 qpair failed and we were unable to recover it. 00:26:30.452 [2024-07-26 14:20:38.218043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.452 [2024-07-26 14:20:38.218069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.452 qpair failed and we were unable to recover it. 00:26:30.452 [2024-07-26 14:20:38.218158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.452 [2024-07-26 14:20:38.218186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.452 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.218282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.218309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.218400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.218426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.218504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.218538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.218620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.218646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.218727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.218753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.218833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.218861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 
00:26:30.453 [2024-07-26 14:20:38.218950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.218977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.219084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.219110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.219197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.219225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.219310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.219339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.219418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.219450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.219562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.219589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.219667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.219693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.219777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.219804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.219913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.219940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.220025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.220051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 
00:26:30.453 [2024-07-26 14:20:38.220134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.220160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.220246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.220272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.220384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.220410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.220491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.220519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.220612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.220639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.220724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.220754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.220836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.220863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.220951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.220979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.221066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.221093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.221184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.221211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 
00:26:30.453 [2024-07-26 14:20:38.221296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.221322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.221402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.221427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.221504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.221536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.221620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.221645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.221727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.221753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.221842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.221868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.221953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.221980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.222086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.222111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.222235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.453 [2024-07-26 14:20:38.222261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.453 qpair failed and we were unable to recover it. 00:26:30.453 [2024-07-26 14:20:38.222342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.222368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 
00:26:30.454 [2024-07-26 14:20:38.222474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.222500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.222592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.222625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.222704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.222729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.222809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.222834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.222913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.222940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.223027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.223052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.223141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.223166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.223261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.223290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.223374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.223400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.223479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.223505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 
00:26:30.454 [2024-07-26 14:20:38.223597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.223625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.223732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.223759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.223849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.223875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.223957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.223985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.224060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.224086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.224198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.224224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.224316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.224342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.224428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.224454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.224546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.224573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.224680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.224707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 
00:26:30.454 [2024-07-26 14:20:38.224783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.224809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.224890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.224916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.225012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.454 [2024-07-26 14:20:38.225040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.454 qpair failed and we were unable to recover it. 00:26:30.454 [2024-07-26 14:20:38.225121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.225148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.225244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.225283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.225381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.225407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.225488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.225514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.225606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.225632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.225715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.225744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.225836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.225864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 
00:26:30.455 [2024-07-26 14:20:38.225943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.225970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.226045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.226071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.226151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.226177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.226263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.226289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.226366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.226393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.226473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.226498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.226589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.226615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.226697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.226723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.226805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.226830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.226937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.226963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 
00:26:30.455 [2024-07-26 14:20:38.227042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.227069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.227150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.227176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.227279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.227309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.227386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.227412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.227490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.227516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.227610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.227637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.227719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.227745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.227827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.227853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.227933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.227960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.228040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.228066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 
00:26:30.455 [2024-07-26 14:20:38.228180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.228208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.228284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.228310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.228407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.228435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.228516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.228549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.228634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.228660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.228748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.228774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.228854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.228880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.228961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.228989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.229068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.229096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 00:26:30.455 [2024-07-26 14:20:38.229184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.455 [2024-07-26 14:20:38.229223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.455 qpair failed and we were unable to recover it. 
00:26:30.455 [2024-07-26 14:20:38.229314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.229342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.229423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.229449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.229537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.229564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.229656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.229682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.229764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.229790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.229872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.229898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.229975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.230001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.230111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.230137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.230227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.230253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.230336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.230362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 
00:26:30.456 [2024-07-26 14:20:38.230439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.230465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.230581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.230610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.230696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.230723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.230809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.230835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.230917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.230944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.231037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.231076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.231165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.231194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.231279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.231306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.231387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.231413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.231495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.231522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 
00:26:30.456 [2024-07-26 14:20:38.231615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.231641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.231749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.231775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.231861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.231887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.231970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.231996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.232074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.232100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.232186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.232212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.232294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.232320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.232406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.232435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.232544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.232572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.232656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.232683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 
00:26:30.456 [2024-07-26 14:20:38.232762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.232788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.232873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.232900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.232985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.456 [2024-07-26 14:20:38.233013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.456 qpair failed and we were unable to recover it. 00:26:30.456 [2024-07-26 14:20:38.233092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.457 [2024-07-26 14:20:38.233119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.457 qpair failed and we were unable to recover it. 00:26:30.457 [2024-07-26 14:20:38.233202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.457 [2024-07-26 14:20:38.233228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.457 qpair failed and we were unable to recover it. 00:26:30.457 [2024-07-26 14:20:38.233320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.457 [2024-07-26 14:20:38.233364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.457 qpair failed and we were unable to recover it. 00:26:30.457 [2024-07-26 14:20:38.233454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.457 [2024-07-26 14:20:38.233482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.457 qpair failed and we were unable to recover it. 00:26:30.457 [2024-07-26 14:20:38.233583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.457 [2024-07-26 14:20:38.233623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.457 qpair failed and we were unable to recover it. 00:26:30.457 [2024-07-26 14:20:38.233728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.457 [2024-07-26 14:20:38.233757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.457 qpair failed and we were unable to recover it. 00:26:30.457 [2024-07-26 14:20:38.233842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.457 [2024-07-26 14:20:38.233870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.457 qpair failed and we were unable to recover it. 
00:26:30.462 [2024-07-26 14:20:38.255676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.462 [2024-07-26 14:20:38.255702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.462 qpair failed and we were unable to recover it. 00:26:30.462 [2024-07-26 14:20:38.255787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.462 [2024-07-26 14:20:38.255813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.462 qpair failed and we were unable to recover it. 00:26:30.462 [2024-07-26 14:20:38.255901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.462 [2024-07-26 14:20:38.255929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.462 qpair failed and we were unable to recover it. 00:26:30.462 [2024-07-26 14:20:38.256017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.462 [2024-07-26 14:20:38.256044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.462 qpair failed and we were unable to recover it. 00:26:30.462 [2024-07-26 14:20:38.256129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.462 [2024-07-26 14:20:38.256157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.462 qpair failed and we were unable to recover it. 00:26:30.462 [2024-07-26 14:20:38.256248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.462 [2024-07-26 14:20:38.256288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.462 qpair failed and we were unable to recover it. 00:26:30.462 [2024-07-26 14:20:38.256376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.462 [2024-07-26 14:20:38.256408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.462 qpair failed and we were unable to recover it. 00:26:30.462 [2024-07-26 14:20:38.256495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.462 [2024-07-26 14:20:38.256521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.462 qpair failed and we were unable to recover it. 00:26:30.462 [2024-07-26 14:20:38.256619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.462 [2024-07-26 14:20:38.256645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.462 qpair failed and we were unable to recover it. 00:26:30.462 [2024-07-26 14:20:38.256736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.462 [2024-07-26 14:20:38.256763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.462 qpair failed and we were unable to recover it. 
00:26:30.462 [2024-07-26 14:20:38.256879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.462 [2024-07-26 14:20:38.256906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.462 qpair failed and we were unable to recover it. 00:26:30.462 [2024-07-26 14:20:38.257011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.462 [2024-07-26 14:20:38.257038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.462 qpair failed and we were unable to recover it. 00:26:30.462 [2024-07-26 14:20:38.257121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.257147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.257230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.257256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.257342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.257368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.257452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.257478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.257565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.257594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.257681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.257709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.257796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.257823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.257941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.257968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 
00:26:30.463 [2024-07-26 14:20:38.258070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.258110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.258200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.258228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.258309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.258336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.258451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.258477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.258562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.258588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.258675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.258700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.258784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.258811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.258902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.258927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.259016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.259042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.259134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.259160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 
00:26:30.463 [2024-07-26 14:20:38.259267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.259293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.259368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.259394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.259486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.259511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.259606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.259636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.259723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.259750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.259836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.259862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.259975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.260002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.260142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.260173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.260288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.260316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.260399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.260426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 
00:26:30.463 [2024-07-26 14:20:38.260503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.260537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.260627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.260654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.260736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.260762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.260873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.260901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.260994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.261021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.261108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.261135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.261246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.261278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.261364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.463 [2024-07-26 14:20:38.261391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.463 qpair failed and we were unable to recover it. 00:26:30.463 [2024-07-26 14:20:38.261502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.261534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.261622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.261648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 
00:26:30.464 [2024-07-26 14:20:38.261725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.261751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.261831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.261858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.261939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.261966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.262046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.262073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.262154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.262183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.262271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.262299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.262392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.262417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.262503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.262536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.262628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.262653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.262773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.262799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 
00:26:30.464 [2024-07-26 14:20:38.262916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.262942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.263047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.263073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.263162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.263188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.263268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.263296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.263386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.263426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.263536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.263575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.263664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.263691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.263773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.263799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.263876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.263902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.263986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.264012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 
00:26:30.464 [2024-07-26 14:20:38.264129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.264157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.264249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.264277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.264361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.264389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.264474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.264505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.464 [2024-07-26 14:20:38.264593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.464 [2024-07-26 14:20:38.264620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.464 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.264712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.264737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.264822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.264847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.264921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.264946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.265029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.265055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.265141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.265166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 
00:26:30.465 [2024-07-26 14:20:38.265265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.265305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.265403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.265432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.265525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.265559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.265639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.265665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.265742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.265768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.265855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.265881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.265988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.266014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.266108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.266134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.266221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.266249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.266336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.266363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 
00:26:30.465 [2024-07-26 14:20:38.266441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.266467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.266553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.266580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.266670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.266697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.266783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.266811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.266895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.266934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.267042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.267069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.267175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.267202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.465 [2024-07-26 14:20:38.267278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.465 [2024-07-26 14:20:38.267304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.465 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.267417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.267444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.267565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.267592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 
00:26:30.466 [2024-07-26 14:20:38.267679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.267706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.267790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.267817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.267915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.267942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.268022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.268049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.268163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.268190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.268276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.268304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.268394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.268423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.268511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.268545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.268631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.268656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.268739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.268765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 
00:26:30.466 [2024-07-26 14:20:38.268853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.268878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.268965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.268990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.269094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.269120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.269200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.269230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.269318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.269344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.269432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.269457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.269584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.269623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.269726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.269766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.269859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.269887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.269965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.269992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 
00:26:30.466 [2024-07-26 14:20:38.270069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.270096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.270182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.270208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.270300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.270328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.270436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.270462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.270549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.270576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.270654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.270681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.270764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.270790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.270883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.270911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.270995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.271022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 00:26:30.466 [2024-07-26 14:20:38.271102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.271128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it. 
00:26:30.466 [2024-07-26 14:20:38.271250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.466 [2024-07-26 14:20:38.271276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.466 qpair failed and we were unable to recover it.
00:26:30.472 [... the identical posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it" triple repeats for every subsequent reconnect attempt from 14:20:38.271466 through 14:20:38.296379, with errno = 111 on each connect(); the failing tqpair handles are 0x7f4330000b90, 0x7f4338000b90, 0x7f4340000b90, and 0x1030250, all targeting addr=10.0.0.2, port=4420 ...]
00:26:30.472 [2024-07-26 14:20:38.296456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.472 [2024-07-26 14:20:38.296482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.472 qpair failed and we were unable to recover it. 00:26:30.472 [2024-07-26 14:20:38.296570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.472 [2024-07-26 14:20:38.296596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.472 qpair failed and we were unable to recover it. 00:26:30.472 [2024-07-26 14:20:38.296686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.472 [2024-07-26 14:20:38.296714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.472 qpair failed and we were unable to recover it. 00:26:30.472 [2024-07-26 14:20:38.296803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.472 [2024-07-26 14:20:38.296830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.472 qpair failed and we were unable to recover it. 00:26:30.472 [2024-07-26 14:20:38.296920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.472 [2024-07-26 14:20:38.296948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.472 qpair failed and we were unable to recover it. 00:26:30.472 [2024-07-26 14:20:38.297030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.472 [2024-07-26 14:20:38.297057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.472 qpair failed and we were unable to recover it. 00:26:30.472 [2024-07-26 14:20:38.297144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.472 [2024-07-26 14:20:38.297170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.472 qpair failed and we were unable to recover it. 00:26:30.472 [2024-07-26 14:20:38.297253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.472 [2024-07-26 14:20:38.297281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.472 qpair failed and we were unable to recover it. 00:26:30.472 [2024-07-26 14:20:38.297413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.472 [2024-07-26 14:20:38.297440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.472 qpair failed and we were unable to recover it. 00:26:30.472 [2024-07-26 14:20:38.297538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.472 [2024-07-26 14:20:38.297566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.472 qpair failed and we were unable to recover it. 
00:26:30.472 [2024-07-26 14:20:38.297651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.472 [2024-07-26 14:20:38.297677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.472 qpair failed and we were unable to recover it. 00:26:30.472 [2024-07-26 14:20:38.297762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.472 [2024-07-26 14:20:38.297789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.472 qpair failed and we were unable to recover it. 00:26:30.472 [2024-07-26 14:20:38.297883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.472 [2024-07-26 14:20:38.297910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.297990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.298016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.298099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.298125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.298246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.298274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.298362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.298390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.298474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.298501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.298617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.298642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.298749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.298775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 
00:26:30.473 [2024-07-26 14:20:38.298864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.298889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.298993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.299019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.299103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.299130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.299212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.299238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.299320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.299345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.299434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.299462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.299546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.299573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.299656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.299682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.299762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.299792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.299924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.299950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 
00:26:30.473 [2024-07-26 14:20:38.300039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.300067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.300150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.300177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.300289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.300317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.300403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.300430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.300517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.300548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.300635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.300661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.300738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.300764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.300852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.300878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.300954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.300980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.301063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.301091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 
00:26:30.473 [2024-07-26 14:20:38.301205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.301232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.301317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.301343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.301428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.301455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.301569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.301596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.301679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.301706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.301786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.301812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.301957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.473 [2024-07-26 14:20:38.301996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.473 qpair failed and we were unable to recover it. 00:26:30.473 [2024-07-26 14:20:38.302091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.302118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.302225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.302251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.302335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.302361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 
00:26:30.474 [2024-07-26 14:20:38.302440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.302466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.302570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.302599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.302682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.302710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.302811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.302850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.302936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.302964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.303047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.303074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.303182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.303208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.303343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.303370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.303457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.303482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.303578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.303606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 
00:26:30.474 [2024-07-26 14:20:38.303689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.303715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.303839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.303865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.303946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.303971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.304055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.304080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.304159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.304185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.304266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.304291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.304399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.304425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.304535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.304561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.304651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.304682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.304764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.304792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 
00:26:30.474 [2024-07-26 14:20:38.304874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.304900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.304991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.305017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.305095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.305121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.305206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.305232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.305343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.305383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.305472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.305500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.305600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.305628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.305741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.305769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.305855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.305882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.305970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.305996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 
00:26:30.474 [2024-07-26 14:20:38.306099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.306126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.474 [2024-07-26 14:20:38.306202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.474 [2024-07-26 14:20:38.306227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.474 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.306317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.306344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.306425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.306450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.306537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.306563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.306653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.306678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.306766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.306791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.306880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.306909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.307000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.307026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.307133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.307161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 
00:26:30.475 [2024-07-26 14:20:38.307240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.307266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.307359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.307399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.307551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.307579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.307666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.307693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.307773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.307800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.307892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.307923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.308038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.308065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.308153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.308181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.308269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.308297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.308380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.308407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 
00:26:30.475 [2024-07-26 14:20:38.308485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.308512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.308604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.308630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.308713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.308739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.308820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.308847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.308954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.308981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.309068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.309097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.309182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.309210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.309304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.309331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.309440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.309466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.309562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.309590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 
00:26:30.475 [2024-07-26 14:20:38.309685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.309712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.309799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.309827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.309911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.309938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.310026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.310053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.310134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.310162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4338000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.310298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.310327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.310439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.310465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.310554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.310582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.310668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.310694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.475 qpair failed and we were unable to recover it. 00:26:30.475 [2024-07-26 14:20:38.310777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.475 [2024-07-26 14:20:38.310803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.476 qpair failed and we were unable to recover it. 
00:26:30.476 [2024-07-26 14:20:38.310891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:20:38.310916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.476 qpair failed and we were unable to recover it. 00:26:30.476 [2024-07-26 14:20:38.310995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:20:38.311021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4340000b90 with addr=10.0.0.2, port=4420 00:26:30.476 qpair failed and we were unable to recover it. 00:26:30.476 [2024-07-26 14:20:38.311134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:20:38.311162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4330000b90 with addr=10.0.0.2, port=4420 00:26:30.476 qpair failed and we were unable to recover it. 00:26:30.476 [2024-07-26 14:20:38.311253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:20:38.311293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.476 qpair failed and we were unable to recover it. 00:26:30.476 [2024-07-26 14:20:38.311384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:20:38.311411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.476 qpair failed and we were unable to recover it. 00:26:30.476 [2024-07-26 14:20:38.311498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:20:38.311526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.476 qpair failed and we were unable to recover it. 00:26:30.476 [2024-07-26 14:20:38.311652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:20:38.311679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.476 qpair failed and we were unable to recover it. 00:26:30.476 [2024-07-26 14:20:38.311761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:20:38.311787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.476 qpair failed and we were unable to recover it. 00:26:30.476 [2024-07-26 14:20:38.311869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:20:38.311895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.476 qpair failed and we were unable to recover it. 00:26:30.476 [2024-07-26 14:20:38.311986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:20:38.312012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.476 qpair failed and we were unable to recover it. 
00:26:30.476 [2024-07-26 14:20:38.312091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.476 [2024-07-26 14:20:38.312118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030250 with addr=10.0.0.2, port=4420 00:26:30.476 qpair failed and we were unable to recover it.
[the same three-entry sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=... with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeats continuously between 14:20:38.312204 and 14:20:38.327631 for tqpair values 0x1030250, 0x7f4330000b90, 0x7f4338000b90 and 0x7f4340000b90]
00:26:30.479 A controller has encountered a failure and is being reset.
[the same connect() failed / sock connection error / qpair failed sequence repeats several more times for tqpair=0x1030250 between 14:20:38.327736 and 14:20:38.328955]
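A note on the failure signature above: errno = 111 is ECONNREFUSED on Linux, i.e. the initiator's TCP SYN to 10.0.0.2:4420 is actively rejected while the target-side listener is gone, which is exactly the condition this disconnect test provokes. A minimal sketch of how one could watch for the listener coming back from plain bash (the address and port are taken from the log; the probe loop itself is illustrative and not part of the test harness):

    #!/usr/bin/env bash
    # Probe the NVMe/TCP listener the initiator keeps failing to reach.
    # A refused connect returns immediately; timeout guards the filtered case.
    ADDR=10.0.0.2
    PORT=4420
    for i in {1..20}; do
        # bash's /dev/tcp pseudo-device attempts a TCP connect on redirection
        if timeout 1 bash -c "</dev/tcp/$ADDR/$PORT" 2>/dev/null; then
            echo "$ADDR:$PORT is accepting connections (attempt $i)"
            exit 0
        fi
        sleep 0.5
    done
    echo "$ADDR:$PORT still refused after 20 attempts" >&2
    exit 1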
00:26:30.480 [2024-07-26 14:20:38.329067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.480 [2024-07-26 14:20:38.329113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x103e230 with addr=10.0.0.2, port=4420 00:26:30.480 [2024-07-26 14:20:38.329134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103e230 is same with the state(5) to be set 00:26:30.480 [2024-07-26 14:20:38.329160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103e230 (9): Bad file descriptor 00:26:30.480 [2024-07-26 14:20:38.329180] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.480 [2024-07-26 14:20:38.329194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.480 [2024-07-26 14:20:38.329211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:30.480 Unable to reset the controller. 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.046 Malloc0 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.046 [2024-07-26 14:20:38.803909] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:31.046 14:20:38 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.046 [2024-07-26 14:20:38.832180] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.046 14:20:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 328905 00:26:31.611 Controller properly reset. 00:26:36.873 Initializing NVMe Controllers 00:26:36.873 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:36.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:36.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:36.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:36.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:36.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:36.873 Initialization complete. Launching workers. 
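The rpc_cmd calls traced above rebuild the target-side configuration that the initiator then successfully reconnects to ("Controller properly reset."). For reference, the same sequence as a standalone script against scripts/rpc.py -- a sketch only: it assumes an nvmf_tgt already listening on the default /var/tmp/spdk.sock RPC socket, and the flags are copied verbatim from the traced commands:

    #!/usr/bin/env bash
    RPC=./scripts/rpc.py   # assumes nvmf_tgt is already up on /var/tmp/spdk.sock
    # 64 MiB malloc-backed bdev with 512-byte blocks, named Malloc0
    $RPC bdev_malloc_create 64 512 -b Malloc0
    # TCP transport (flags as in the traced rpc_cmd invocation)
    $RPC nvmf_create_transport -t tcp -o
    # subsystem allowing any host (-a), serial number SPDK00000000000001
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # data and discovery listeners on the address/port the initiator retries
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420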
00:26:36.873 Starting thread on core 1 00:26:36.873 Starting thread on core 2 00:26:36.873 Starting thread on core 3 00:26:36.873 Starting thread on core 0 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:36.873 00:26:36.873 real 0m10.713s 00:26:36.873 user 0m33.389s 00:26:36.873 sys 0m7.432s 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.873 ************************************ 00:26:36.873 END TEST nvmf_target_disconnect_tc2 00:26:36.873 ************************************ 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.873 rmmod nvme_tcp 00:26:36.873 rmmod nvme_fabrics 00:26:36.873 rmmod nvme_keyring 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 329352 ']' 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 329352 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 329352 ']' 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 329352 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 329352 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 329352' 00:26:36.873 killing process with pid 329352 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 329352 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 329352 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:36.873 14:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.780 14:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:38.780 00:26:38.780 real 0m15.651s 00:26:38.780 user 0m58.491s 00:26:38.780 sys 0m10.090s 00:26:38.780 14:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:38.780 14:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:38.780 ************************************ 00:26:38.780 END TEST nvmf_target_disconnect 00:26:38.780 ************************************ 00:26:38.780 14:20:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:38.780 00:26:38.780 real 4m58.808s 00:26:38.780 user 10m44.872s 00:26:38.780 sys 1m15.195s 00:26:38.780 14:20:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:38.780 14:20:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.780 ************************************ 00:26:38.780 END TEST nvmf_host 00:26:38.780 ************************************ 00:26:38.780 00:26:38.780 real 19m12.708s 00:26:38.780 user 45m16.185s 00:26:38.780 sys 4m52.358s 00:26:38.780 14:20:46 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:38.780 14:20:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:38.780 ************************************ 00:26:38.780 END TEST nvmf_tcp 00:26:38.780 ************************************ 00:26:38.780 14:20:46 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:26:38.780 14:20:46 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:38.780 14:20:46 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:38.780 14:20:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:38.780 14:20:46 -- common/autotest_common.sh@10 -- # set +x 00:26:38.780 ************************************ 00:26:38.780 START TEST spdkcli_nvmf_tcp 00:26:38.780 ************************************ 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:38.780 * Looking for test storage... 
00:26:38.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.780 14:20:46 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=330512 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 330512 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 330512 ']' 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:38.781 14:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:38.781 [2024-07-26 14:20:46.671983] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:26:38.781 [2024-07-26 14:20:46.672072] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330512 ] 00:26:38.781 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.781 [2024-07-26 14:20:46.733194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:39.039 [2024-07-26 14:20:46.840109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.039 [2024-07-26 14:20:46.840113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.039 14:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:39.039 14:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:26:39.039 14:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:39.039 14:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:39.039 14:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:39.039 14:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:39.039 14:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:39.039 14:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:39.039 14:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:39.039 14:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:39.039 14:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:39.039 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:39.039 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:39.039 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:39.039 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:39.039 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:39.039 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:39.039 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:39.039 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:39.039 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:39.039 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:39.039 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:39.039 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:39.039 ' 00:26:41.568 [2024-07-26 14:20:49.551651] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.965 [2024-07-26 14:20:50.767914] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:45.491 [2024-07-26 14:20:53.051005] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:47.387 [2024-07-26 14:20:54.992945] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:48.758 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:48.758 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:48.758 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:48.758 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:48.758 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:48.758 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:48.758 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:48.758 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:48.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:48.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:48.758 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:48.758 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:48.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:48.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:48.758 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:48.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:48.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:48.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:48.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:48.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:48.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:48.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:48.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:48.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:48.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:48.759 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:48.759 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:48.759 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:48.759 14:20:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:48.759 14:20:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:48.759 14:20:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:48.759 14:20:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:48.759 14:20:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:48.759 14:20:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:48.759 14:20:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:26:48.759 14:20:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:49.016 14:20:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:49.274 14:20:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:49.274 14:20:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:49.274 14:20:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:49.274 14:20:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.274 14:20:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:49.274 14:20:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:49.274 14:20:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.274 14:20:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:49.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:49.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:49.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:49.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:49.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:49.274 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:49.274 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:49.274 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:49.274 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:49.274 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:49.274 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:49.274 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:49.274 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:49.274 ' 00:26:54.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:54.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:54.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:54.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:54.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:54.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:54.594 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:54.594 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:54.594 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:54.594 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:54.594 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:26:54.594 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:54.594 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:54.594 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 330512 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 330512 ']' 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 330512 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 330512 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 330512' 00:26:54.594 killing process with pid 330512 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 330512 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 330512 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 330512 ']' 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 330512 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 330512 ']' 00:26:54.594 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 330512 00:26:54.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (330512) - No such process 00:26:54.595 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 330512 is not found' 00:26:54.595 Process with pid 330512 is not found 00:26:54.595 14:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:54.595 14:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:54.595 14:21:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:54.595 00:26:54.595 real 0m15.975s 00:26:54.595 user 0m33.607s 00:26:54.595 sys 0m0.875s 00:26:54.595 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:54.595 14:21:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:54.595 ************************************ 00:26:54.595 END TEST spdkcli_nvmf_tcp 00:26:54.595 ************************************ 00:26:54.595 14:21:02 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:54.595 14:21:02 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:54.595 14:21:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:54.595 14:21:02 -- common/autotest_common.sh@10 -- # set +x 00:26:54.595 ************************************ 00:26:54.595 START TEST nvmf_identify_passthru 00:26:54.595 ************************************ 00:26:54.595 14:21:02 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:54.595 * Looking for test storage... 00:26:54.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:54.595 14:21:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.595 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:26:54.595 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.595 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.595 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.595 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.595 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.595 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.595 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.595 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.595 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.853 14:21:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.853 14:21:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.853 14:21:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.853 14:21:02 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.853 14:21:02 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.853 14:21:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.853 14:21:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:54.853 14:21:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:54.853 14:21:02 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.853 14:21:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.853 14:21:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.853 14:21:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.853 14:21:02 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.853 14:21:02 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.853 14:21:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.853 14:21:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:54.853 14:21:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.853 14:21:02 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.853 14:21:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:54.853 14:21:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:54.853 14:21:02 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:26:54.853 14:21:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:56.749 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.749 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:56.750 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:56.750 14:21:04 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:56.750 Found net devices under 0000:09:00.0: cvl_0_0 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:56.750 Found net devices under 0000:09:00.1: cvl_0_1 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:56.750 14:21:04 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:56.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:26:56.750 00:26:56.750 --- 10.0.0.2 ping statistics --- 00:26:56.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.750 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:56.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:26:56.750 00:26:56.750 --- 10.0.0.1 ping statistics --- 00:26:56.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.750 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:56.750 14:21:04 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:57.010 14:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:57.010 14:21:04 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:57.010 14:21:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:57.010 14:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:57.010 14:21:04 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:26:57.010 14:21:04 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:26:57.010 14:21:04 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:26:57.010 14:21:04 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:26:57.010 14:21:04 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:26:57.010 14:21:04 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:26:57.010 14:21:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:57.010 14:21:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:57.010 14:21:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:26:57.010 14:21:04 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:26:57.010 14:21:04 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:26:57.010 14:21:04 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:0b:00.0 00:26:57.010 14:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:26:57.010 14:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:26:57.010 14:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:26:57.010 14:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:57.010 14:21:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:57.010 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.196 
14:21:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:27:01.196 14:21:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:27:01.196 14:21:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:01.196 14:21:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:01.196 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.384 14:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:05.384 14:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:05.384 14:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:05.384 14:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=335639 00:27:05.384 14:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:05.384 14:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:05.384 14:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 335639 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 335639 ']' 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:05.384 [2024-07-26 14:21:13.123671] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:27:05.384 [2024-07-26 14:21:13.123752] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.384 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.384 [2024-07-26 14:21:13.190746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:05.384 [2024-07-26 14:21:13.298698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.384 [2024-07-26 14:21:13.298759] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:05.384 [2024-07-26 14:21:13.298772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.384 [2024-07-26 14:21:13.298783] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.384 [2024-07-26 14:21:13.298792] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.384 [2024-07-26 14:21:13.298843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.384 [2024-07-26 14:21:13.298869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.384 [2024-07-26 14:21:13.298992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.384 [2024-07-26 14:21:13.298995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:27:05.384 14:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:05.384 INFO: Log level set to 20 00:27:05.384 INFO: Requests: 00:27:05.384 { 00:27:05.384 "jsonrpc": "2.0", 00:27:05.384 "method": "nvmf_set_config", 00:27:05.384 "id": 1, 00:27:05.384 "params": { 00:27:05.384 "admin_cmd_passthru": { 00:27:05.384 "identify_ctrlr": true 00:27:05.384 } 00:27:05.384 } 00:27:05.384 } 00:27:05.384 00:27:05.384 INFO: response: 00:27:05.384 { 00:27:05.384 "jsonrpc": "2.0", 00:27:05.384 "id": 1, 00:27:05.384 "result": true 00:27:05.384 } 00:27:05.384 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.384 14:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.384 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:05.384 INFO: Setting log level to 20 00:27:05.384 INFO: Setting log level to 20 00:27:05.384 INFO: Log level set to 20 00:27:05.384 INFO: Log level set to 20 00:27:05.384 INFO: Requests: 00:27:05.384 { 00:27:05.384 "jsonrpc": "2.0", 00:27:05.384 "method": "framework_start_init", 00:27:05.384 "id": 1 00:27:05.385 } 00:27:05.385 00:27:05.385 INFO: Requests: 00:27:05.385 { 00:27:05.385 "jsonrpc": "2.0", 00:27:05.385 "method": "framework_start_init", 00:27:05.385 "id": 1 00:27:05.385 } 00:27:05.385 00:27:05.642 [2024-07-26 14:21:13.445905] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:05.642 INFO: response: 00:27:05.643 { 00:27:05.643 "jsonrpc": "2.0", 00:27:05.643 "id": 1, 00:27:05.643 "result": true 00:27:05.643 } 00:27:05.643 00:27:05.643 INFO: response: 00:27:05.643 { 00:27:05.643 "jsonrpc": "2.0", 00:27:05.643 "id": 1, 00:27:05.643 "result": true 00:27:05.643 } 00:27:05.643 00:27:05.643 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.643 14:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:05.643 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.643 14:21:13 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:27:05.643 INFO: Setting log level to 40 00:27:05.643 INFO: Setting log level to 40 00:27:05.643 INFO: Setting log level to 40 00:27:05.643 [2024-07-26 14:21:13.456009] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.643 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.643 14:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:05.643 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:05.643 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:05.643 14:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:27:05.643 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.643 14:21:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.921 Nvme0n1 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.921 [2024-07-26 14:21:16.350921] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.921 [ 00:27:08.921 { 00:27:08.921 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:08.921 "subtype": "Discovery", 00:27:08.921 "listen_addresses": [], 00:27:08.921 "allow_any_host": true, 00:27:08.921 "hosts": [] 00:27:08.921 }, 00:27:08.921 { 00:27:08.921 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:08.921 "subtype": "NVMe", 00:27:08.921 "listen_addresses": [ 00:27:08.921 { 00:27:08.921 "trtype": "TCP", 00:27:08.921 "adrfam": "IPv4", 00:27:08.921 "traddr": "10.0.0.2", 00:27:08.921 "trsvcid": "4420" 00:27:08.921 } 00:27:08.921 ], 00:27:08.921 "allow_any_host": true, 00:27:08.921 "hosts": [], 00:27:08.921 "serial_number": 
"SPDK00000000000001", 00:27:08.921 "model_number": "SPDK bdev Controller", 00:27:08.921 "max_namespaces": 1, 00:27:08.921 "min_cntlid": 1, 00:27:08.921 "max_cntlid": 65519, 00:27:08.921 "namespaces": [ 00:27:08.921 { 00:27:08.921 "nsid": 1, 00:27:08.921 "bdev_name": "Nvme0n1", 00:27:08.921 "name": "Nvme0n1", 00:27:08.921 "nguid": "695B549538F7483F9A1B331082898577", 00:27:08.921 "uuid": "695b5495-38f7-483f-9a1b-331082898577" 00:27:08.921 } 00:27:08.921 ] 00:27:08.921 } 00:27:08.921 ] 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:08.921 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:08.921 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:08.921 14:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:08.921 14:21:16 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:08.921 14:21:16 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:08.921 14:21:16 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:08.921 14:21:16 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:08.921 14:21:16 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:08.921 14:21:16 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:08.921 rmmod nvme_tcp 00:27:08.921 rmmod nvme_fabrics 00:27:08.921 rmmod nvme_keyring 00:27:08.921 14:21:16 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:08.921 14:21:16 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:08.921 14:21:16 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:08.921 14:21:16 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 335639 ']' 00:27:08.921 14:21:16 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 335639 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 335639 ']' 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 335639 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 335639 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 335639' 00:27:08.921 killing process with pid 335639 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 335639 00:27:08.921 14:21:16 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 335639 00:27:10.295 14:21:18 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:10.295 14:21:18 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:10.295 14:21:18 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:10.295 14:21:18 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:10.295 14:21:18 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:10.295 14:21:18 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.295 14:21:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:10.295 14:21:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.828 14:21:20 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:12.828 00:27:12.828 real 0m17.734s 00:27:12.828 user 0m26.003s 00:27:12.828 sys 0m2.362s 00:27:12.828 14:21:20 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:12.828 14:21:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:12.828 ************************************ 00:27:12.828 END TEST nvmf_identify_passthru 00:27:12.828 ************************************ 00:27:12.828 14:21:20 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:12.828 14:21:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:12.828 14:21:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:12.828 14:21:20 -- common/autotest_common.sh@10 -- # set +x 00:27:12.828 ************************************ 00:27:12.828 START TEST nvmf_dif 00:27:12.828 ************************************ 00:27:12.828 14:21:20 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:12.828 * Looking for test storage... 
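The identify_passthru run that just finished reduces to a short sequence worth spelling out. A condensed sketch, assuming the standard scripts/rpc.py entry point and a $local_sn variable holding the serial number read from the local PCIe controller — both illustrative shorthand, not the verbatim test script:

    # Export a local PCIe NVMe controller over NVMe/TCP, then identify it remotely.
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # The test passes when the serial seen over the fabric matches the local one.
    remote_sn=$(build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' |
        awk '/Serial Number:/ {print $3}')
    [ "$remote_sn" = "$local_sn" ] || echo 'passthru serial mismatch'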
00:27:12.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:12.828 14:21:20 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.828 14:21:20 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.828 14:21:20 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.828 14:21:20 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.828 14:21:20 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.828 14:21:20 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.828 14:21:20 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.828 14:21:20 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:27:12.828 14:21:20 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:12.828 14:21:20 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:12.828 14:21:20 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:12.828 14:21:20 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:12.828 14:21:20 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:12.828 14:21:20 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.828 14:21:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:12.828 14:21:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:12.828 14:21:20 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:12.828 14:21:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:14.729 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:14.729 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
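The device scan above keys on PCI vendor/device IDs (0x8086:0x159b for the E810 ports on this node) and then looks under sysfs for the net interface bound to each function. A minimal stand-in for that loop, using lspci in place of the script's pci_bus_cache (an assumption made for brevity):

    # List net devices behind each Intel E810 (8086:159b) PCI function.
    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            dev=${net##*/}
            # Mirrors the script's [[ up == up ]] operstate check.
            [ "$(cat "$net/operstate")" = up ] && echo "Found net devices under $pci: $dev"
        done
    done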
00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:14.729 Found net devices under 0000:09:00.0: cvl_0_0 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.729 14:21:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:14.730 Found net devices under 0000:09:00.1: cvl_0_1 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.730 14:21:22 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:14.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:27:14.730 00:27:14.730 --- 10.0.0.2 ping statistics --- 00:27:14.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.730 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:14.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:27:14.730 00:27:14.730 --- 10.0.0.1 ping statistics --- 00:27:14.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.730 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:14.730 14:21:22 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:16.105 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:16.105 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:16.105 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:16.105 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:16.105 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:16.105 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:16.105 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:16.105 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:16.105 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:16.105 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:16.105 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:16.105 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:16.105 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:16.105 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:16.105 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:16.105 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:16.105 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:16.105 14:21:23 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.105 14:21:23 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:16.105 14:21:23 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:16.105 14:21:23 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.105 14:21:23 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:16.105 14:21:23 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:16.105 14:21:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:16.105 14:21:23 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:16.105 14:21:23 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:16.105 14:21:23 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:16.105 14:21:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:16.105 14:21:23 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=338903 00:27:16.105 14:21:23 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:16.105 14:21:23 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 338903 00:27:16.105 14:21:23 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 338903 ']' 00:27:16.105 14:21:23 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.105 14:21:23 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:16.105 14:21:23 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.105 14:21:23 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:16.105 14:21:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:16.105 [2024-07-26 14:21:23.998542] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:27:16.105 [2024-07-26 14:21:23.998633] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.105 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.105 [2024-07-26 14:21:24.063107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.364 [2024-07-26 14:21:24.167994] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.364 [2024-07-26 14:21:24.168049] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.364 [2024-07-26 14:21:24.168069] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.364 [2024-07-26 14:21:24.168079] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.364 [2024-07-26 14:21:24.168089] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
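For the dif tests the harness runs the target inside a network namespace: one port of the E810 pair (cvl_0_0) is moved into the namespace, while its sibling (cvl_0_1) stays in the root namespace as the initiator side. Condensed from the nvmf_tcp_init commands logged above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    # nvmf_tgt is then launched inside the namespace (nvmfpid=338903 above):
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF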
00:27:16.364 [2024-07-26 14:21:24.168114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.364 14:21:24 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:16.364 14:21:24 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:27:16.364 14:21:24 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:16.364 14:21:24 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:16.364 14:21:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:16.364 14:21:24 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.364 14:21:24 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:16.364 14:21:24 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:16.364 14:21:24 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.364 14:21:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:16.364 [2024-07-26 14:21:24.308019] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.364 14:21:24 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.364 14:21:24 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:16.364 14:21:24 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:16.364 14:21:24 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:16.364 14:21:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:16.364 ************************************ 00:27:16.364 START TEST fio_dif_1_default 00:27:16.364 ************************************ 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:16.364 bdev_null0 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:16.364 [2024-07-26 14:21:24.364287] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:16.364 { 00:27:16.364 "params": { 00:27:16.364 "name": "Nvme$subsystem", 00:27:16.364 "trtype": "$TEST_TRANSPORT", 00:27:16.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.364 "adrfam": "ipv4", 00:27:16.364 "trsvcid": "$NVMF_PORT", 00:27:16.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.364 "hdgst": ${hdgst:-false}, 00:27:16.364 "ddgst": ${ddgst:-false} 00:27:16.364 }, 00:27:16.364 "method": "bdev_nvme_attach_controller" 00:27:16.364 } 00:27:16.364 EOF 00:27:16.364 )") 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:16.364 14:21:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:16.364 "params": { 00:27:16.364 "name": "Nvme0", 00:27:16.364 "trtype": "tcp", 00:27:16.364 "traddr": "10.0.0.2", 00:27:16.364 "adrfam": "ipv4", 00:27:16.364 "trsvcid": "4420", 00:27:16.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:16.365 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:16.365 "hdgst": false, 00:27:16.365 "ddgst": false 00:27:16.365 }, 00:27:16.365 "method": "bdev_nvme_attach_controller" 00:27:16.365 }' 00:27:16.623 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:16.623 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:16.623 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.623 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:16.623 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:16.623 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:16.623 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:16.623 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:16.623 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:16.623 14:21:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:16.623 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:16.623 fio-3.35 00:27:16.623 Starting 1 thread 00:27:16.881 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.075 00:27:29.075 filename0: (groupid=0, jobs=1): err= 0: pid=339133: Fri Jul 26 14:21:35 2024 00:27:29.075 read: IOPS=191, BW=767KiB/s (786kB/s)(7680KiB/10008msec) 00:27:29.075 slat (nsec): min=6791, max=48215, avg=8585.56, stdev=2657.05 00:27:29.075 clat (usec): min=515, max=44461, avg=20822.50, stdev=20460.31 00:27:29.075 lat (usec): min=522, max=44495, avg=20831.08, stdev=20460.16 00:27:29.075 clat percentiles (usec): 00:27:29.075 | 1.00th=[ 570], 5.00th=[ 578], 10.00th=[ 586], 20.00th=[ 603], 00:27:29.075 | 30.00th=[ 627], 40.00th=[ 660], 50.00th=[ 734], 60.00th=[41157], 00:27:29.075 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:27:29.075 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:27:29.075 | 99.99th=[44303] 00:27:29.075 bw ( KiB/s): min= 704, max= 960, per=99.82%, avg=766.40, stdev=54.42, samples=20 00:27:29.075 iops : min= 176, max= 240, 
avg=191.60, stdev=13.60, samples=20 00:27:29.075 lat (usec) : 750=50.26%, 1000=0.36% 00:27:29.075 lat (msec) : 50=49.38% 00:27:29.075 cpu : usr=89.96%, sys=9.75%, ctx=21, majf=0, minf=244 00:27:29.075 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:29.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.075 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.075 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:29.075 00:27:29.075 Run status group 0 (all jobs): 00:27:29.075 READ: bw=767KiB/s (786kB/s), 767KiB/s-767KiB/s (786kB/s-786kB/s), io=7680KiB (7864kB), run=10008-10008msec 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.075 00:27:29.075 real 0m11.242s 00:27:29.075 user 0m10.364s 00:27:29.075 sys 0m1.272s 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:29.075 ************************************ 00:27:29.075 END TEST fio_dif_1_default 00:27:29.075 ************************************ 00:27:29.075 14:21:35 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:29.075 14:21:35 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:29.075 14:21:35 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:29.075 14:21:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:29.075 ************************************ 00:27:29.075 START TEST fio_dif_1_multi_subsystems 00:27:29.075 ************************************ 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:29.075 14:21:35 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:29.075 bdev_null0 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:29.075 [2024-07-26 14:21:35.660468] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:29.075 bdev_null1 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.075 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.075 { 00:27:29.075 "params": { 00:27:29.075 "name": "Nvme$subsystem", 00:27:29.075 "trtype": "$TEST_TRANSPORT", 00:27:29.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.076 "adrfam": "ipv4", 00:27:29.076 "trsvcid": "$NVMF_PORT", 00:27:29.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.076 "hdgst": ${hdgst:-false}, 00:27:29.076 "ddgst": ${ddgst:-false} 00:27:29.076 }, 00:27:29.076 "method": "bdev_nvme_attach_controller" 00:27:29.076 } 00:27:29.076 EOF 00:27:29.076 )") 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 
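Everything fio_dif_1_multi_subsystems needs on the target side is the DIF-capable transport plus two DIF-enabled null bdevs, each behind its own subsystem. Condensed from the rpc_cmd calls above (scripts/rpc.py as the entry point is an assumption; the arguments are as logged):

    # tcp transport with DIF insert/strip enabled (target/dif.sh@50, -o as logged)
    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    for sub in 0 1; do
        # 64 MB null bdev: 512-byte blocks + 16-byte metadata, DIF type 1
        scripts/rpc.py bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 1
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub \
            --serial-number 53313233-$sub --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub \
            -t tcp -a 10.0.0.2 -s 4420
    done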
00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.076 { 00:27:29.076 "params": { 00:27:29.076 "name": "Nvme$subsystem", 00:27:29.076 "trtype": "$TEST_TRANSPORT", 00:27:29.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.076 "adrfam": "ipv4", 00:27:29.076 "trsvcid": "$NVMF_PORT", 00:27:29.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.076 "hdgst": ${hdgst:-false}, 00:27:29.076 "ddgst": ${ddgst:-false} 00:27:29.076 }, 00:27:29.076 "method": "bdev_nvme_attach_controller" 00:27:29.076 } 00:27:29.076 EOF 00:27:29.076 )") 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
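Two generators feed the fio plugin through process substitution: gen_nvmf_target_json (the /dev/fd/62 side, printed just below) describes the bdev controllers, while gen_fio_conf (the /dev/fd/61 side) emits the job file. The job file itself is never echoed in the log; reconstructed from the fio banner that follows, it looks roughly like this — the path and the exact [global] options are assumptions:

    cat > /tmp/dif.fio <<'EOF'   # hypothetical path; fed via /dev/fd/61 in the harness
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=4k
    iodepth=4

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF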
00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:29.076 "params": { 00:27:29.076 "name": "Nvme0", 00:27:29.076 "trtype": "tcp", 00:27:29.076 "traddr": "10.0.0.2", 00:27:29.076 "adrfam": "ipv4", 00:27:29.076 "trsvcid": "4420", 00:27:29.076 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:29.076 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:29.076 "hdgst": false, 00:27:29.076 "ddgst": false 00:27:29.076 }, 00:27:29.076 "method": "bdev_nvme_attach_controller" 00:27:29.076 },{ 00:27:29.076 "params": { 00:27:29.076 "name": "Nvme1", 00:27:29.076 "trtype": "tcp", 00:27:29.076 "traddr": "10.0.0.2", 00:27:29.076 "adrfam": "ipv4", 00:27:29.076 "trsvcid": "4420", 00:27:29.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:29.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:29.076 "hdgst": false, 00:27:29.076 "ddgst": false 00:27:29.076 }, 00:27:29.076 "method": "bdev_nvme_attach_controller" 00:27:29.076 }' 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:29.076 14:21:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:29.076 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:29.076 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:29.076 fio-3.35 00:27:29.076 Starting 2 threads 00:27:29.076 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.044 00:27:39.044 filename0: (groupid=0, jobs=1): err= 0: pid=340536: Fri Jul 26 14:21:46 2024 00:27:39.044 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10040msec) 00:27:39.044 slat (nsec): min=7333, max=32068, avg=9507.33, stdev=2459.73 00:27:39.044 clat (usec): min=511, max=44809, avg=21062.45, stdev=20380.78 00:27:39.044 lat (usec): min=519, max=44825, avg=21071.96, stdev=20380.60 00:27:39.044 clat percentiles (usec): 00:27:39.044 | 1.00th=[ 537], 5.00th=[ 562], 10.00th=[ 578], 20.00th=[ 594], 00:27:39.044 | 30.00th=[ 603], 40.00th=[ 619], 50.00th=[41157], 60.00th=[41157], 00:27:39.044 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:39.044 | 99.00th=[41681], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:27:39.044 | 99.99th=[44827] 00:27:39.044 
bw ( KiB/s): min= 672, max= 768, per=66.14%, avg=760.00, stdev=25.16, samples=20 00:27:39.044 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:27:39.044 lat (usec) : 750=48.74%, 1000=1.05% 00:27:39.044 lat (msec) : 50=50.21% 00:27:39.044 cpu : usr=94.12%, sys=5.60%, ctx=21, majf=0, minf=110 00:27:39.044 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:39.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.044 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:39.044 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:39.044 filename1: (groupid=0, jobs=1): err= 0: pid=340537: Fri Jul 26 14:21:46 2024 00:27:39.044 read: IOPS=97, BW=391KiB/s (401kB/s)(3920KiB/10015msec) 00:27:39.044 slat (nsec): min=7702, max=31818, avg=9714.30, stdev=2613.47 00:27:39.044 clat (usec): min=595, max=42807, avg=40846.29, stdev=2586.71 00:27:39.044 lat (usec): min=603, max=42823, avg=40856.00, stdev=2586.66 00:27:39.044 clat percentiles (usec): 00:27:39.044 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:39.044 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:39.044 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:39.044 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:27:39.044 | 99.99th=[42730] 00:27:39.044 bw ( KiB/s): min= 384, max= 416, per=33.94%, avg=390.40, stdev=13.13, samples=20 00:27:39.044 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:27:39.044 lat (usec) : 750=0.41% 00:27:39.044 lat (msec) : 50=99.59% 00:27:39.044 cpu : usr=94.43%, sys=5.29%, ctx=25, majf=0, minf=166 00:27:39.044 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:39.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.044 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:39.044 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:39.044 00:27:39.044 Run status group 0 (all jobs): 00:27:39.044 READ: bw=1149KiB/s (1177kB/s), 391KiB/s-759KiB/s (401kB/s-777kB/s), io=11.3MiB (11.8MB), run=10015-10040msec 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.044 00:27:39.044 real 0m11.363s 00:27:39.044 user 0m20.169s 00:27:39.044 sys 0m1.360s 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:39.044 14:21:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:39.044 ************************************ 00:27:39.044 END TEST fio_dif_1_multi_subsystems 00:27:39.044 ************************************ 00:27:39.044 14:21:47 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:39.044 14:21:47 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:39.044 14:21:47 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:39.044 14:21:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:39.044 ************************************ 00:27:39.044 START TEST fio_dif_rand_params 00:27:39.044 ************************************ 00:27:39.044 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:27:39.044 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:39.044 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:39.045 14:21:47 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.045 bdev_null0 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.045 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.305 [2024-07-26 14:21:47.075313] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:39.305 { 00:27:39.305 "params": { 00:27:39.305 "name": "Nvme$subsystem", 00:27:39.305 "trtype": "$TEST_TRANSPORT", 00:27:39.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.305 "adrfam": "ipv4", 00:27:39.305 "trsvcid": "$NVMF_PORT", 00:27:39.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.305 "hdgst": ${hdgst:-false}, 00:27:39.305 "ddgst": ${ddgst:-false} 00:27:39.305 }, 00:27:39.305 "method": "bdev_nvme_attach_controller" 00:27:39.305 } 00:27:39.305 EOF 00:27:39.305 )") 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
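The ldd / grep / awk triplet traced above is how the harness assembles LD_PRELOAD before launching fio with the external SPDK ioengine: any sanitizer runtime the plugin links against must be preloaded ahead of the plugin itself, or interposition breaks. A condensed sketch of that logic, with the loop structure assumed (the individual commands appear verbatim in the trace):

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    LD_PRELOAD=
    for sanitizer in libasan libclang_rt.asan; do
        # ldd prints "libfoo.so => /path/libfoo.so (0x...)"; field 3 is the path
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [ -n "$asan_lib" ] && LD_PRELOAD="$asan_lib $LD_PRELOAD"
    done
    LD_PRELOAD="$LD_PRELOAD $plugin"   # the plugin itself is always preloaded last

In this run both greps come back empty (not a sanitizer build), so LD_PRELOAD ends up holding only the plugin path, exactly as printed below.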
00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:39.305 "params": { 00:27:39.305 "name": "Nvme0", 00:27:39.305 "trtype": "tcp", 00:27:39.305 "traddr": "10.0.0.2", 00:27:39.305 "adrfam": "ipv4", 00:27:39.305 "trsvcid": "4420", 00:27:39.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:39.305 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:39.305 "hdgst": false, 00:27:39.305 "ddgst": false 00:27:39.305 }, 00:27:39.305 "method": "bdev_nvme_attach_controller" 00:27:39.305 }' 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:39.305 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:39.306 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:39.306 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:39.306 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:39.306 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:39.306 14:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:39.562 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:39.562 ... 
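The generated job file reaches fio over /dev/fd/61 and is never echoed, but from the printed job line (rw=randread, bs=128KiB, ioengine=spdk_bdev, iodepth=3) and the parameters set at the top of this test (bs=128k, numjobs=3, iodepth=3, runtime=5) its contents are approximately the following. This is a reconstruction, not a capture; the bdev name Nvme0n1 is inferred from the Nvme0 controller attached in the JSON config above, and time_based is assumed from the fixed 5 s runtime:

    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1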
00:27:39.562 fio-3.35 00:27:39.562 Starting 3 threads 00:27:39.562 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.116 00:27:46.116 filename0: (groupid=0, jobs=1): err= 0: pid=341934: Fri Jul 26 14:21:53 2024 00:27:46.116 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(145MiB/5006msec) 00:27:46.116 slat (nsec): min=4607, max=73135, avg=18051.83, stdev=4297.80 00:27:46.116 clat (usec): min=7075, max=53227, avg=12944.13, stdev=3292.88 00:27:46.116 lat (usec): min=7093, max=53246, avg=12962.18, stdev=3292.97 00:27:46.116 clat percentiles (usec): 00:27:46.116 | 1.00th=[ 8586], 5.00th=[10421], 10.00th=[10814], 20.00th=[11469], 00:27:46.116 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12649], 60.00th=[13042], 00:27:46.116 | 70.00th=[13566], 80.00th=[14222], 90.00th=[15139], 95.00th=[15795], 00:27:46.116 | 99.00th=[17433], 99.50th=[50594], 99.90th=[53216], 99.95th=[53216], 00:27:46.116 | 99.99th=[53216] 00:27:46.116 bw ( KiB/s): min=27904, max=31232, per=33.65%, avg=29568.00, stdev=1214.31, samples=10 00:27:46.116 iops : min= 218, max= 244, avg=231.00, stdev= 9.49, samples=10 00:27:46.116 lat (msec) : 10=3.28%, 20=96.20%, 100=0.52% 00:27:46.116 cpu : usr=94.53%, sys=4.98%, ctx=10, majf=0, minf=122 00:27:46.116 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:46.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:46.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:46.116 issued rwts: total=1158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:46.116 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:46.116 filename0: (groupid=0, jobs=1): err= 0: pid=341935: Fri Jul 26 14:21:53 2024 00:27:46.116 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(141MiB/5045msec) 00:27:46.116 slat (nsec): min=4532, max=36991, avg=16411.92, stdev=3505.16 00:27:46.116 clat (usec): min=7637, max=52204, avg=13383.72, stdev=2586.19 00:27:46.116 lat (usec): min=7650, max=52218, avg=13400.13, stdev=2586.28 00:27:46.116 clat percentiles (usec): 00:27:46.116 | 1.00th=[ 8356], 5.00th=[10290], 10.00th=[10814], 20.00th=[11600], 00:27:46.116 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13042], 60.00th=[13698], 00:27:46.116 | 70.00th=[14484], 80.00th=[15270], 90.00th=[16319], 95.00th=[16909], 00:27:46.116 | 99.00th=[17695], 99.50th=[18220], 99.90th=[47973], 99.95th=[52167], 00:27:46.116 | 99.99th=[52167] 00:27:46.116 bw ( KiB/s): min=26880, max=29696, per=32.75%, avg=28774.40, stdev=811.34, samples=10 00:27:46.116 iops : min= 210, max= 232, avg=224.80, stdev= 6.34, samples=10 00:27:46.116 lat (msec) : 10=4.09%, 20=95.74%, 50=0.09%, 100=0.09% 00:27:46.116 cpu : usr=96.65%, sys=2.84%, ctx=11, majf=0, minf=73 00:27:46.116 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:46.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:46.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:46.116 issued rwts: total=1126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:46.116 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:46.116 filename0: (groupid=0, jobs=1): err= 0: pid=341936: Fri Jul 26 14:21:53 2024 00:27:46.116 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(147MiB/5045msec) 00:27:46.116 slat (nsec): min=5191, max=83388, avg=16592.61, stdev=4175.50 00:27:46.116 clat (usec): min=7230, max=57794, avg=12779.75, stdev=3755.92 00:27:46.116 lat (usec): min=7243, max=57819, avg=12796.34, stdev=3755.75 00:27:46.116 clat percentiles (usec): 00:27:46.116 | 1.00th=[ 
8291], 5.00th=[10159], 10.00th=[10683], 20.00th=[11207], 00:27:46.116 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 00:27:46.116 | 70.00th=[13173], 80.00th=[13829], 90.00th=[15139], 95.00th=[15926], 00:27:46.116 | 99.00th=[17171], 99.50th=[51119], 99.90th=[57934], 99.95th=[57934], 00:27:46.116 | 99.99th=[57934] 00:27:46.116 bw ( KiB/s): min=28729, max=31744, per=34.27%, avg=30111.30, stdev=878.36, samples=10 00:27:46.116 iops : min= 224, max= 248, avg=235.20, stdev= 6.94, samples=10 00:27:46.116 lat (msec) : 10=3.99%, 20=95.34%, 50=0.08%, 100=0.59% 00:27:46.116 cpu : usr=96.00%, sys=3.51%, ctx=7, majf=0, minf=146 00:27:46.116 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:46.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:46.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:46.116 issued rwts: total=1179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:46.116 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:46.116 00:27:46.116 Run status group 0 (all jobs): 00:27:46.116 READ: bw=85.8MiB/s (90.0MB/s), 27.9MiB/s-29.2MiB/s (29.3MB/s-30.6MB/s), io=433MiB (454MB), run=5006-5045msec 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:46.116 14:21:53 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.116 bdev_null0 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.116 [2024-07-26 14:21:53.380124] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.116 bdev_null1 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
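Each of the three subsystems in this test is provisioned with the same four-RPC sequence, differing only in index. Condensed here for subsystem 0, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper (the RPC names and arguments are exactly those traced above):

    rpc=./spdk/scripts/rpc.py
    # 64 MiB null bdev, 512-byte blocks plus 16 bytes of metadata, DIF type 2
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420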
00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.116 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.117 bdev_null2 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:46.117 14:21:53 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.117 { 00:27:46.117 "params": { 00:27:46.117 "name": "Nvme$subsystem", 00:27:46.117 "trtype": "$TEST_TRANSPORT", 00:27:46.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.117 "adrfam": "ipv4", 00:27:46.117 "trsvcid": "$NVMF_PORT", 00:27:46.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.117 "hdgst": ${hdgst:-false}, 00:27:46.117 "ddgst": ${ddgst:-false} 00:27:46.117 }, 00:27:46.117 "method": "bdev_nvme_attach_controller" 00:27:46.117 } 00:27:46.117 EOF 00:27:46.117 )") 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.117 { 00:27:46.117 "params": { 00:27:46.117 "name": "Nvme$subsystem", 00:27:46.117 "trtype": "$TEST_TRANSPORT", 00:27:46.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.117 "adrfam": "ipv4", 00:27:46.117 "trsvcid": "$NVMF_PORT", 00:27:46.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.117 "hdgst": ${hdgst:-false}, 00:27:46.117 "ddgst": ${ddgst:-false} 00:27:46.117 }, 00:27:46.117 "method": "bdev_nvme_attach_controller" 00:27:46.117 } 00:27:46.117 EOF 00:27:46.117 )") 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.117 { 00:27:46.117 "params": { 00:27:46.117 "name": "Nvme$subsystem", 00:27:46.117 "trtype": "$TEST_TRANSPORT", 00:27:46.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.117 "adrfam": "ipv4", 00:27:46.117 "trsvcid": "$NVMF_PORT", 00:27:46.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.117 "hdgst": ${hdgst:-false}, 00:27:46.117 "ddgst": ${ddgst:-false} 00:27:46.117 }, 00:27:46.117 "method": "bdev_nvme_attach_controller" 00:27:46.117 } 00:27:46.117 EOF 00:27:46.117 )") 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:46.117 "params": { 00:27:46.117 "name": "Nvme0", 00:27:46.117 "trtype": "tcp", 00:27:46.117 "traddr": "10.0.0.2", 00:27:46.117 "adrfam": "ipv4", 00:27:46.117 "trsvcid": "4420", 00:27:46.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:46.117 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:46.117 "hdgst": false, 00:27:46.117 "ddgst": false 00:27:46.117 }, 00:27:46.117 "method": "bdev_nvme_attach_controller" 00:27:46.117 },{ 00:27:46.117 "params": { 00:27:46.117 "name": "Nvme1", 00:27:46.117 "trtype": "tcp", 00:27:46.117 "traddr": "10.0.0.2", 00:27:46.117 "adrfam": "ipv4", 00:27:46.117 "trsvcid": "4420", 00:27:46.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:46.117 "hdgst": false, 00:27:46.117 "ddgst": false 00:27:46.117 }, 00:27:46.117 "method": "bdev_nvme_attach_controller" 00:27:46.117 },{ 00:27:46.117 "params": { 00:27:46.117 "name": "Nvme2", 00:27:46.117 "trtype": "tcp", 00:27:46.117 "traddr": "10.0.0.2", 00:27:46.117 "adrfam": "ipv4", 00:27:46.117 "trsvcid": "4420", 00:27:46.117 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:46.117 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:46.117 "hdgst": false, 00:27:46.117 "ddgst": false 00:27:46.117 }, 00:27:46.117 "method": "bdev_nvme_attach_controller" 00:27:46.117 }' 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:46.117 14:21:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:46.117 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:46.117 ... 00:27:46.117 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:46.117 ... 00:27:46.117 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:46.117 ... 00:27:46.117 fio-3.35 00:27:46.117 Starting 24 threads 00:27:46.117 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.329 00:27:58.329 filename0: (groupid=0, jobs=1): err= 0: pid=342798: Fri Jul 26 14:22:04 2024 00:27:58.329 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10121msec) 00:27:58.329 slat (nsec): min=8454, max=95072, avg=35024.21, stdev=15930.05 00:27:58.329 clat (msec): min=105, max=345, avg=240.67, stdev=37.98 00:27:58.329 lat (msec): min=105, max=345, avg=240.70, stdev=37.97 00:27:58.329 clat percentiles (msec): 00:27:58.329 | 1.00th=[ 144], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 211], 00:27:58.329 | 30.00th=[ 228], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 257], 00:27:58.329 | 70.00th=[ 262], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 279], 00:27:58.329 | 99.00th=[ 342], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:27:58.329 | 99.99th=[ 347] 00:27:58.329 bw ( KiB/s): min= 144, max= 400, per=3.87%, avg=262.40, stdev=62.60, samples=20 00:27:58.329 iops : min= 36, max= 100, avg=65.60, stdev=15.65, samples=20 00:27:58.329 lat (msec) : 250=46.43%, 500=53.57% 00:27:58.329 cpu : usr=97.96%, sys=1.58%, ctx=30, majf=0, minf=32 00:27:58.329 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:27:58.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.329 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.329 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.329 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.329 filename0: (groupid=0, jobs=1): err= 0: pid=342799: Fri Jul 26 14:22:04 2024 00:27:58.329 read: IOPS=67, BW=272KiB/s (278kB/s)(2752KiB/10133msec) 00:27:58.329 slat (usec): min=6, max=117, avg=61.11, stdev=18.55 00:27:58.329 clat (msec): min=104, max=270, avg=235.10, stdev=36.72 00:27:58.329 lat (msec): min=104, max=270, avg=235.16, stdev=36.73 00:27:58.329 clat percentiles (msec): 00:27:58.329 | 1.00th=[ 106], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 209], 00:27:58.329 | 30.00th=[ 228], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 253], 00:27:58.329 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 268], 00:27:58.329 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:27:58.329 | 99.99th=[ 271] 00:27:58.329 bw ( KiB/s): min= 128, max= 384, per=3.95%, avg=268.80, stdev=55.57, samples=20 00:27:58.329 iops : min= 32, max= 96, avg=67.20, stdev=13.89, samples=20 00:27:58.329 lat (msec) : 250=50.00%, 500=50.00% 00:27:58.329 cpu : usr=97.80%, sys=1.67%, ctx=49, majf=0, minf=40 00:27:58.329 IO depths : 1=5.8%, 
2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:27:58.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.329 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.329 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.329 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.329 filename0: (groupid=0, jobs=1): err= 0: pid=342800: Fri Jul 26 14:22:04 2024 00:27:58.329 read: IOPS=69, BW=278KiB/s (285kB/s)(2816KiB/10135msec) 00:27:58.329 slat (usec): min=7, max=124, avg=60.84, stdev=19.45 00:27:58.329 clat (msec): min=98, max=269, avg=229.81, stdev=42.49 00:27:58.329 lat (msec): min=98, max=270, avg=229.87, stdev=42.50 00:27:58.329 clat percentiles (msec): 00:27:58.329 | 1.00th=[ 100], 5.00th=[ 159], 10.00th=[ 178], 20.00th=[ 184], 00:27:58.329 | 30.00th=[ 226], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 251], 00:27:58.329 | 70.00th=[ 257], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 268], 00:27:58.329 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:27:58.329 | 99.99th=[ 271] 00:27:58.329 bw ( KiB/s): min= 256, max= 384, per=4.06%, avg=275.20, stdev=44.84, samples=20 00:27:58.329 iops : min= 64, max= 96, avg=68.80, stdev=11.21, samples=20 00:27:58.329 lat (msec) : 100=2.27%, 250=51.70%, 500=46.02% 00:27:58.329 cpu : usr=97.84%, sys=1.63%, ctx=44, majf=0, minf=28 00:27:58.329 IO depths : 1=4.3%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:27:58.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.329 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.329 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.329 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.329 filename0: (groupid=0, jobs=1): err= 0: pid=342801: Fri Jul 26 14:22:04 2024 00:27:58.329 read: IOPS=83, BW=335KiB/s (343kB/s)(3392KiB/10134msec) 00:27:58.329 slat (nsec): min=7633, max=96648, avg=56926.45, stdev=16206.40 00:27:58.329 clat (msec): min=46, max=325, avg=190.13, stdev=47.03 00:27:58.329 lat (msec): min=46, max=325, avg=190.19, stdev=47.03 00:27:58.329 clat percentiles (msec): 00:27:58.329 | 1.00th=[ 47], 5.00th=[ 100], 10.00th=[ 159], 20.00th=[ 169], 00:27:58.329 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:27:58.329 | 70.00th=[ 211], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 253], 00:27:58.329 | 99.00th=[ 296], 99.50th=[ 321], 99.90th=[ 326], 99.95th=[ 326], 00:27:58.329 | 99.99th=[ 326] 00:27:58.329 bw ( KiB/s): min= 256, max= 512, per=4.90%, avg=332.80, stdev=73.89, samples=20 00:27:58.329 iops : min= 64, max= 128, avg=83.20, stdev=18.47, samples=20 00:27:58.329 lat (msec) : 50=1.89%, 100=3.77%, 250=83.02%, 500=11.32% 00:27:58.329 cpu : usr=98.18%, sys=1.39%, ctx=26, majf=0, minf=44 00:27:58.329 IO depths : 1=1.8%, 2=5.9%, 4=18.6%, 8=63.0%, 16=10.7%, 32=0.0%, >=64=0.0% 00:27:58.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.329 complete : 0=0.0%, 4=92.3%, 8=2.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.329 issued rwts: total=848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.329 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.329 filename0: (groupid=0, jobs=1): err= 0: pid=342802: Fri Jul 26 14:22:04 2024 00:27:58.329 read: IOPS=66, BW=265KiB/s (271kB/s)(2680KiB/10111msec) 00:27:58.329 slat (nsec): min=9483, max=73518, avg=32719.77, stdev=12102.13 00:27:58.329 clat (msec): min=142, max=364, avg=240.94, 
stdev=36.23 00:27:58.329 lat (msec): min=142, max=364, avg=240.97, stdev=36.23 00:27:58.329 clat percentiles (msec): 00:27:58.329 | 1.00th=[ 144], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 218], 00:27:58.329 | 30.00th=[ 236], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:27:58.329 | 70.00th=[ 262], 80.00th=[ 266], 90.00th=[ 268], 95.00th=[ 271], 00:27:58.329 | 99.00th=[ 326], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:27:58.329 | 99.99th=[ 363] 00:27:58.329 bw ( KiB/s): min= 128, max= 384, per=3.85%, avg=261.60, stdev=65.51, samples=20 00:27:58.329 iops : min= 32, max= 96, avg=65.40, stdev=16.38, samples=20 00:27:58.329 lat (msec) : 250=45.67%, 500=54.33% 00:27:58.329 cpu : usr=97.87%, sys=1.77%, ctx=14, majf=0, minf=29 00:27:58.329 IO depths : 1=5.7%, 2=11.9%, 4=25.1%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:27:58.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.329 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.329 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.329 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.329 filename0: (groupid=0, jobs=1): err= 0: pid=342803: Fri Jul 26 14:22:04 2024 00:27:58.329 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10121msec) 00:27:58.329 slat (usec): min=18, max=113, avg=67.62, stdev=14.57 00:27:58.329 clat (msec): min=113, max=344, avg=240.40, stdev=36.43 00:27:58.329 lat (msec): min=113, max=345, avg=240.46, stdev=36.44 00:27:58.329 clat percentiles (msec): 00:27:58.329 | 1.00th=[ 163], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 209], 00:27:58.329 | 30.00th=[ 234], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 253], 00:27:58.329 | 70.00th=[ 262], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 288], 00:27:58.330 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 347], 99.95th=[ 347], 00:27:58.330 | 99.99th=[ 347] 00:27:58.330 bw ( KiB/s): min= 144, max= 384, per=3.87%, avg=262.40, stdev=46.83, samples=20 00:27:58.330 iops : min= 36, max= 96, avg=65.60, stdev=11.71, samples=20 00:27:58.330 lat (msec) : 250=49.40%, 500=50.60% 00:27:58.330 cpu : usr=97.64%, sys=1.70%, ctx=56, majf=0, minf=30 00:27:58.330 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:27:58.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.330 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.330 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.330 filename0: (groupid=0, jobs=1): err= 0: pid=342804: Fri Jul 26 14:22:04 2024 00:27:58.330 read: IOPS=65, BW=261KiB/s (268kB/s)(2624KiB/10039msec) 00:27:58.330 slat (usec): min=17, max=102, avg=63.09, stdev=16.68 00:27:58.330 clat (msec): min=132, max=357, avg=244.30, stdev=40.02 00:27:58.330 lat (msec): min=132, max=357, avg=244.37, stdev=40.02 00:27:58.330 clat percentiles (msec): 00:27:58.330 | 1.00th=[ 150], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 213], 00:27:58.330 | 30.00th=[ 236], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 259], 00:27:58.330 | 70.00th=[ 264], 80.00th=[ 268], 90.00th=[ 271], 95.00th=[ 317], 00:27:58.330 | 99.00th=[ 355], 99.50th=[ 355], 99.90th=[ 359], 99.95th=[ 359], 00:27:58.330 | 99.99th=[ 359] 00:27:58.330 bw ( KiB/s): min= 128, max= 384, per=3.76%, avg=256.00, stdev=41.53, samples=20 00:27:58.330 iops : min= 32, max= 96, avg=64.00, stdev=10.38, samples=20 00:27:58.330 lat (msec) : 250=49.39%, 500=50.61% 00:27:58.330 cpu : 
usr=98.20%, sys=1.33%, ctx=23, majf=0, minf=39 00:27:58.330 IO depths : 1=3.7%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.8%, 32=0.0%, >=64=0.0% 00:27:58.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.330 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.330 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.330 filename0: (groupid=0, jobs=1): err= 0: pid=342805: Fri Jul 26 14:22:04 2024 00:27:58.330 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10102msec) 00:27:58.330 slat (usec): min=10, max=103, avg=65.87, stdev=13.76 00:27:58.330 clat (msec): min=141, max=357, avg=244.53, stdev=40.45 00:27:58.330 lat (msec): min=141, max=357, avg=244.60, stdev=40.45 00:27:58.330 clat percentiles (msec): 00:27:58.330 | 1.00th=[ 153], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 213], 00:27:58.330 | 30.00th=[ 234], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 257], 00:27:58.330 | 70.00th=[ 264], 80.00th=[ 268], 90.00th=[ 271], 95.00th=[ 321], 00:27:58.330 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:27:58.330 | 99.99th=[ 359] 00:27:58.330 bw ( KiB/s): min= 144, max= 368, per=3.76%, avg=256.00, stdev=36.71, samples=20 00:27:58.330 iops : min= 36, max= 92, avg=64.00, stdev= 9.18, samples=20 00:27:58.330 lat (msec) : 250=49.39%, 500=50.61% 00:27:58.330 cpu : usr=97.98%, sys=1.51%, ctx=23, majf=0, minf=34 00:27:58.330 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:27:58.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.330 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.330 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.330 filename1: (groupid=0, jobs=1): err= 0: pid=342806: Fri Jul 26 14:22:04 2024 00:27:58.330 read: IOPS=82, BW=328KiB/s (336kB/s)(3320KiB/10120msec) 00:27:58.330 slat (usec): min=7, max=103, avg=19.07, stdev=15.55 00:27:58.330 clat (msec): min=110, max=322, avg=194.48, stdev=36.29 00:27:58.330 lat (msec): min=110, max=322, avg=194.50, stdev=36.30 00:27:58.330 clat percentiles (msec): 00:27:58.330 | 1.00th=[ 133], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 171], 00:27:58.330 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 184], 00:27:58.330 | 70.00th=[ 211], 80.00th=[ 228], 90.00th=[ 251], 95.00th=[ 264], 00:27:58.330 | 99.00th=[ 296], 99.50th=[ 305], 99.90th=[ 321], 99.95th=[ 321], 00:27:58.330 | 99.99th=[ 321] 00:27:58.330 bw ( KiB/s): min= 144, max= 384, per=4.80%, avg=325.60, stdev=67.14, samples=20 00:27:58.330 iops : min= 36, max= 96, avg=81.40, stdev=16.78, samples=20 00:27:58.330 lat (msec) : 250=87.47%, 500=12.53% 00:27:58.330 cpu : usr=97.88%, sys=1.64%, ctx=42, majf=0, minf=31 00:27:58.330 IO depths : 1=1.3%, 2=3.9%, 4=13.6%, 8=69.9%, 16=11.3%, 32=0.0%, >=64=0.0% 00:27:58.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.330 complete : 0=0.0%, 4=90.8%, 8=3.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.330 issued rwts: total=830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.330 filename1: (groupid=0, jobs=1): err= 0: pid=342807: Fri Jul 26 14:22:04 2024 00:27:58.330 read: IOPS=65, BW=261KiB/s (268kB/s)(2624KiB/10039msec) 00:27:58.330 slat (usec): min=9, max=112, avg=54.69, stdev=25.18 00:27:58.330 clat (msec): 
min=132, max=357, avg=244.40, stdev=41.60 00:27:58.330 lat (msec): min=132, max=357, avg=244.45, stdev=41.61 00:27:58.330 clat percentiles (msec): 00:27:58.330 | 1.00th=[ 146], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 213], 00:27:58.330 | 30.00th=[ 236], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 257], 00:27:58.330 | 70.00th=[ 264], 80.00th=[ 268], 90.00th=[ 271], 95.00th=[ 321], 00:27:58.330 | 99.00th=[ 355], 99.50th=[ 355], 99.90th=[ 359], 99.95th=[ 359], 00:27:58.330 | 99.99th=[ 359] 00:27:58.330 bw ( KiB/s): min= 128, max= 384, per=3.76%, avg=256.00, stdev=41.53, samples=20 00:27:58.330 iops : min= 32, max= 96, avg=64.00, stdev=10.38, samples=20 00:27:58.330 lat (msec) : 250=49.39%, 500=50.61% 00:27:58.330 cpu : usr=97.74%, sys=1.65%, ctx=55, majf=0, minf=45 00:27:58.330 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:27:58.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.330 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.330 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.330 filename1: (groupid=0, jobs=1): err= 0: pid=342808: Fri Jul 26 14:22:04 2024 00:27:58.330 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10120msec) 00:27:58.330 slat (usec): min=7, max=112, avg=69.66, stdev=13.89 00:27:58.330 clat (msec): min=177, max=270, avg=240.32, stdev=27.23 00:27:58.330 lat (msec): min=177, max=270, avg=240.39, stdev=27.23 00:27:58.330 clat percentiles (msec): 00:27:58.330 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 226], 00:27:58.330 | 30.00th=[ 236], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 253], 00:27:58.330 | 70.00th=[ 262], 80.00th=[ 264], 90.00th=[ 266], 95.00th=[ 268], 00:27:58.330 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:27:58.330 | 99.99th=[ 271] 00:27:58.330 bw ( KiB/s): min= 128, max= 384, per=3.87%, avg=262.40, stdev=50.44, samples=20 00:27:58.330 iops : min= 32, max= 96, avg=65.60, stdev=12.61, samples=20 00:27:58.330 lat (msec) : 250=48.66%, 500=51.34% 00:27:58.330 cpu : usr=97.62%, sys=1.75%, ctx=41, majf=0, minf=29 00:27:58.330 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:58.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.330 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.330 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.330 filename1: (groupid=0, jobs=1): err= 0: pid=342809: Fri Jul 26 14:22:04 2024 00:27:58.330 read: IOPS=65, BW=261KiB/s (268kB/s)(2624KiB/10043msec) 00:27:58.330 slat (usec): min=17, max=100, avg=67.20, stdev=15.09 00:27:58.330 clat (msec): min=140, max=357, avg=244.38, stdev=35.53 00:27:58.330 lat (msec): min=140, max=357, avg=244.45, stdev=35.53 00:27:58.330 clat percentiles (msec): 00:27:58.330 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 215], 00:27:58.330 | 30.00th=[ 236], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 255], 00:27:58.330 | 70.00th=[ 262], 80.00th=[ 266], 90.00th=[ 268], 95.00th=[ 271], 00:27:58.330 | 99.00th=[ 355], 99.50th=[ 355], 99.90th=[ 359], 99.95th=[ 359], 00:27:58.330 | 99.99th=[ 359] 00:27:58.330 bw ( KiB/s): min= 128, max= 384, per=3.76%, avg=256.00, stdev=41.85, samples=20 00:27:58.330 iops : min= 32, max= 96, avg=64.00, stdev=10.46, samples=20 00:27:58.330 lat (msec) : 250=50.30%, 
500=49.70% 00:27:58.330 cpu : usr=97.52%, sys=1.76%, ctx=69, majf=0, minf=28 00:27:58.330 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:27:58.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.330 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.330 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.330 filename1: (groupid=0, jobs=1): err= 0: pid=342810: Fri Jul 26 14:22:04 2024 00:27:58.330 read: IOPS=80, BW=323KiB/s (330kB/s)(3264KiB/10119msec) 00:27:58.330 slat (usec): min=8, max=109, avg=32.69, stdev=24.11 00:27:58.330 clat (msec): min=140, max=332, avg=197.07, stdev=33.38 00:27:58.330 lat (msec): min=140, max=332, avg=197.10, stdev=33.39 00:27:58.330 clat percentiles (msec): 00:27:58.330 | 1.00th=[ 144], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 171], 00:27:58.330 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 188], 00:27:58.330 | 70.00th=[ 218], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 251], 00:27:58.330 | 99.00th=[ 259], 99.50th=[ 259], 99.90th=[ 334], 99.95th=[ 334], 00:27:58.330 | 99.99th=[ 334] 00:27:58.330 bw ( KiB/s): min= 128, max= 384, per=4.72%, avg=320.00, stdev=75.23, samples=20 00:27:58.330 iops : min= 32, max= 96, avg=80.00, stdev=18.81, samples=20 00:27:58.330 lat (msec) : 250=90.20%, 500=9.80% 00:27:58.330 cpu : usr=97.78%, sys=1.71%, ctx=37, majf=0, minf=35 00:27:58.331 IO depths : 1=2.9%, 2=9.2%, 4=25.0%, 8=53.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:27:58.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.331 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.331 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.331 filename1: (groupid=0, jobs=1): err= 0: pid=342811: Fri Jul 26 14:22:04 2024 00:27:58.331 read: IOPS=88, BW=352KiB/s (361kB/s)(3568KiB/10134msec) 00:27:58.331 slat (nsec): min=9086, max=53418, avg=20688.90, stdev=5831.90 00:27:58.331 clat (msec): min=45, max=262, avg=180.94, stdev=41.21 00:27:58.331 lat (msec): min=45, max=262, avg=180.96, stdev=41.21 00:27:58.331 clat percentiles (msec): 00:27:58.331 | 1.00th=[ 46], 5.00th=[ 100], 10.00th=[ 144], 20.00th=[ 163], 00:27:58.331 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:27:58.331 | 70.00th=[ 184], 80.00th=[ 220], 90.00th=[ 243], 95.00th=[ 249], 00:27:58.331 | 99.00th=[ 264], 99.50th=[ 264], 99.90th=[ 264], 99.95th=[ 264], 00:27:58.331 | 99.99th=[ 264] 00:27:58.331 bw ( KiB/s): min= 256, max= 512, per=5.16%, avg=350.40, stdev=70.01, samples=20 00:27:58.331 iops : min= 64, max= 128, avg=87.60, stdev=17.50, samples=20 00:27:58.331 lat (msec) : 50=1.79%, 100=3.59%, 250=90.13%, 500=4.48% 00:27:58.331 cpu : usr=97.38%, sys=1.90%, ctx=25, majf=0, minf=54 00:27:58.331 IO depths : 1=3.0%, 2=6.6%, 4=16.9%, 8=63.9%, 16=9.5%, 32=0.0%, >=64=0.0% 00:27:58.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.331 complete : 0=0.0%, 4=91.7%, 8=2.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.331 issued rwts: total=892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.331 filename1: (groupid=0, jobs=1): err= 0: pid=342812: Fri Jul 26 14:22:04 2024 00:27:58.331 read: IOPS=66, BW=265KiB/s (271kB/s)(2680KiB/10118msec) 00:27:58.331 slat (nsec): min=6334, 
max=94731, avg=40969.62, stdev=18691.65 00:27:58.331 clat (msec): min=140, max=366, avg=241.10, stdev=43.97 00:27:58.331 lat (msec): min=140, max=366, avg=241.14, stdev=43.96 00:27:58.331 clat percentiles (msec): 00:27:58.331 | 1.00th=[ 146], 5.00th=[ 167], 10.00th=[ 178], 20.00th=[ 209], 00:27:58.331 | 30.00th=[ 226], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 257], 00:27:58.331 | 70.00th=[ 264], 80.00th=[ 266], 90.00th=[ 271], 95.00th=[ 334], 00:27:58.331 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 368], 99.95th=[ 368], 00:27:58.331 | 99.99th=[ 368] 00:27:58.331 bw ( KiB/s): min= 128, max= 384, per=3.85%, avg=261.60, stdev=64.06, samples=20 00:27:58.331 iops : min= 32, max= 96, avg=65.40, stdev=16.01, samples=20 00:27:58.331 lat (msec) : 250=48.96%, 500=51.04% 00:27:58.331 cpu : usr=97.85%, sys=1.73%, ctx=18, majf=0, minf=26 00:27:58.331 IO depths : 1=3.3%, 2=9.6%, 4=25.1%, 8=53.0%, 16=9.1%, 32=0.0%, >=64=0.0% 00:27:58.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.331 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.331 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.331 filename1: (groupid=0, jobs=1): err= 0: pid=342813: Fri Jul 26 14:22:04 2024 00:27:58.331 read: IOPS=67, BW=272KiB/s (278kB/s)(2752KiB/10134msec) 00:27:58.331 slat (usec): min=4, max=110, avg=63.68, stdev=18.26 00:27:58.331 clat (msec): min=98, max=315, avg=235.16, stdev=39.72 00:27:58.331 lat (msec): min=98, max=315, avg=235.22, stdev=39.72 00:27:58.331 clat percentiles (msec): 00:27:58.331 | 1.00th=[ 100], 5.00th=[ 161], 10.00th=[ 182], 20.00th=[ 211], 00:27:58.331 | 30.00th=[ 236], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 253], 00:27:58.331 | 70.00th=[ 262], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 271], 00:27:58.331 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 317], 99.95th=[ 317], 00:27:58.331 | 99.99th=[ 317] 00:27:58.331 bw ( KiB/s): min= 144, max= 384, per=3.95%, avg=268.80, stdev=55.57, samples=20 00:27:58.331 iops : min= 36, max= 96, avg=67.20, stdev=13.89, samples=20 00:27:58.331 lat (msec) : 100=2.03%, 250=46.95%, 500=51.02% 00:27:58.331 cpu : usr=97.64%, sys=1.67%, ctx=57, majf=0, minf=45 00:27:58.331 IO depths : 1=2.6%, 2=8.9%, 4=25.0%, 8=53.6%, 16=9.9%, 32=0.0%, >=64=0.0% 00:27:58.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.331 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.331 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.331 filename2: (groupid=0, jobs=1): err= 0: pid=342814: Fri Jul 26 14:22:04 2024 00:27:58.331 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10101msec) 00:27:58.331 slat (usec): min=18, max=100, avg=65.98, stdev=11.56 00:27:58.331 clat (msec): min=112, max=422, avg=245.81, stdev=35.33 00:27:58.331 lat (msec): min=112, max=422, avg=245.88, stdev=35.33 00:27:58.331 clat percentiles (msec): 00:27:58.331 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 197], 20.00th=[ 226], 00:27:58.331 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 255], 00:27:58.331 | 70.00th=[ 262], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 271], 00:27:58.331 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 422], 99.95th=[ 422], 00:27:58.331 | 99.99th=[ 422] 00:27:58.331 bw ( KiB/s): min= 128, max= 384, per=3.78%, avg=256.00, stdev=57.10, samples=20 00:27:58.331 iops : min= 32, max= 96, 
avg=64.00, stdev=14.28, samples=20 00:27:58.331 lat (msec) : 250=46.49%, 500=53.51% 00:27:58.331 cpu : usr=97.87%, sys=1.57%, ctx=22, majf=0, minf=43 00:27:58.331 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:27:58.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.331 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.331 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.331 filename2: (groupid=0, jobs=1): err= 0: pid=342815: Fri Jul 26 14:22:04 2024 00:27:58.331 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10105msec) 00:27:58.331 slat (nsec): min=8328, max=78441, avg=30791.44, stdev=13139.26 00:27:58.331 clat (msec): min=139, max=366, avg=240.32, stdev=43.25 00:27:58.331 lat (msec): min=139, max=366, avg=240.35, stdev=43.25 00:27:58.331 clat percentiles (msec): 00:27:58.331 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 178], 20.00th=[ 209], 00:27:58.331 | 30.00th=[ 228], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 257], 00:27:58.331 | 70.00th=[ 264], 80.00th=[ 266], 90.00th=[ 271], 95.00th=[ 321], 00:27:58.331 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 368], 99.95th=[ 368], 00:27:58.331 | 99.99th=[ 368] 00:27:58.331 bw ( KiB/s): min= 144, max= 384, per=3.87%, avg=262.40, stdev=46.26, samples=20 00:27:58.331 iops : min= 36, max= 96, avg=65.60, stdev=11.56, samples=20 00:27:58.331 lat (msec) : 250=47.02%, 500=52.98% 00:27:58.331 cpu : usr=97.81%, sys=1.69%, ctx=52, majf=0, minf=23 00:27:58.331 IO depths : 1=3.7%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:27:58.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.331 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.331 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.331 filename2: (groupid=0, jobs=1): err= 0: pid=342816: Fri Jul 26 14:22:04 2024 00:27:58.331 read: IOPS=94, BW=377KiB/s (386kB/s)(3816KiB/10134msec) 00:27:58.331 slat (usec): min=6, max=116, avg=15.09, stdev=14.04 00:27:58.331 clat (msec): min=18, max=269, avg=168.71, stdev=40.18 00:27:58.331 lat (msec): min=18, max=269, avg=168.73, stdev=40.18 00:27:58.331 clat percentiles (msec): 00:27:58.331 | 1.00th=[ 19], 5.00th=[ 86], 10.00th=[ 120], 20.00th=[ 157], 00:27:58.331 | 30.00th=[ 163], 40.00th=[ 171], 50.00th=[ 174], 60.00th=[ 178], 00:27:58.331 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 211], 95.00th=[ 247], 00:27:58.331 | 99.00th=[ 266], 99.50th=[ 268], 99.90th=[ 271], 99.95th=[ 271], 00:27:58.331 | 99.99th=[ 271] 00:27:58.331 bw ( KiB/s): min= 272, max= 512, per=5.53%, avg=375.20, stdev=60.87, samples=20 00:27:58.331 iops : min= 68, max= 128, avg=93.80, stdev=15.22, samples=20 00:27:58.331 lat (msec) : 20=1.68%, 100=3.98%, 250=90.99%, 500=3.35% 00:27:58.331 cpu : usr=98.12%, sys=1.39%, ctx=47, majf=0, minf=59 00:27:58.331 IO depths : 1=0.7%, 2=1.9%, 4=9.1%, 8=76.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:27:58.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.331 complete : 0=0.0%, 4=89.4%, 8=5.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.331 issued rwts: total=954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.331 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.331 filename2: (groupid=0, jobs=1): err= 0: pid=342817: Fri Jul 26 14:22:04 2024 00:27:58.331 read: IOPS=74, 
BW=299KiB/s (306kB/s)(3032KiB/10131msec) 00:27:58.331 slat (nsec): min=5582, max=86254, avg=28885.85, stdev=18354.25 00:27:58.331 clat (msec): min=132, max=305, avg=212.86, stdev=38.76 00:27:58.331 lat (msec): min=132, max=306, avg=212.89, stdev=38.76 00:27:58.331 clat percentiles (msec): 00:27:58.331 | 1.00th=[ 142], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 178], 00:27:58.331 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 211], 60.00th=[ 239], 00:27:58.331 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 259], 95.00th=[ 266], 00:27:58.332 | 99.00th=[ 268], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:27:58.332 | 99.99th=[ 305] 00:27:58.332 bw ( KiB/s): min= 128, max= 384, per=4.37%, avg=296.80, stdev=72.58, samples=20 00:27:58.332 iops : min= 32, max= 96, avg=74.20, stdev=18.14, samples=20 00:27:58.332 lat (msec) : 250=72.30%, 500=27.70% 00:27:58.332 cpu : usr=97.86%, sys=1.73%, ctx=22, majf=0, minf=30 00:27:58.332 IO depths : 1=3.0%, 2=8.2%, 4=21.6%, 8=57.7%, 16=9.5%, 32=0.0%, >=64=0.0% 00:27:58.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.332 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.332 issued rwts: total=758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.332 filename2: (groupid=0, jobs=1): err= 0: pid=342818: Fri Jul 26 14:22:04 2024 00:27:58.332 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10120msec) 00:27:58.332 slat (usec): min=17, max=111, avg=69.43, stdev=14.37 00:27:58.332 clat (msec): min=114, max=348, avg=240.38, stdev=37.41 00:27:58.332 lat (msec): min=114, max=348, avg=240.45, stdev=37.42 00:27:58.332 clat percentiles (msec): 00:27:58.332 | 1.00th=[ 157], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 209], 00:27:58.332 | 30.00th=[ 236], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 253], 00:27:58.332 | 70.00th=[ 262], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 296], 00:27:58.332 | 99.00th=[ 342], 99.50th=[ 347], 99.90th=[ 351], 99.95th=[ 351], 00:27:58.332 | 99.99th=[ 351] 00:27:58.332 bw ( KiB/s): min= 128, max= 384, per=3.87%, avg=262.40, stdev=50.44, samples=20 00:27:58.332 iops : min= 32, max= 96, avg=65.60, stdev=12.61, samples=20 00:27:58.332 lat (msec) : 250=50.15%, 500=49.85% 00:27:58.332 cpu : usr=96.85%, sys=2.03%, ctx=213, majf=0, minf=33 00:27:58.332 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:27:58.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.332 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.332 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.332 filename2: (groupid=0, jobs=1): err= 0: pid=342819: Fri Jul 26 14:22:04 2024 00:27:58.332 read: IOPS=65, BW=261KiB/s (267kB/s)(2624KiB/10047msec) 00:27:58.332 slat (usec): min=16, max=101, avg=69.35, stdev=12.49 00:27:58.332 clat (msec): min=139, max=325, avg=244.42, stdev=29.59 00:27:58.332 lat (msec): min=139, max=325, avg=244.49, stdev=29.59 00:27:58.332 clat percentiles (msec): 00:27:58.332 | 1.00th=[ 178], 5.00th=[ 178], 10.00th=[ 207], 20.00th=[ 226], 00:27:58.332 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 257], 00:27:58.332 | 70.00th=[ 262], 80.00th=[ 266], 90.00th=[ 268], 95.00th=[ 271], 00:27:58.332 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 326], 99.95th=[ 326], 00:27:58.332 | 99.99th=[ 326] 00:27:58.332 bw ( KiB/s): min= 128, max= 384, per=3.76%, avg=256.00, 
stdev=58.73, samples=20 00:27:58.332 iops : min= 32, max= 96, avg=64.00, stdev=14.68, samples=20 00:27:58.332 lat (msec) : 250=48.78%, 500=51.22% 00:27:58.332 cpu : usr=97.75%, sys=1.64%, ctx=50, majf=0, minf=32 00:27:58.332 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:58.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.332 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.332 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.332 filename2: (groupid=0, jobs=1): err= 0: pid=342820: Fri Jul 26 14:22:04 2024 00:27:58.332 read: IOPS=69, BW=277KiB/s (284kB/s)(2808KiB/10136msec) 00:27:58.332 slat (usec): min=6, max=156, avg=61.69, stdev=16.59 00:27:58.332 clat (msec): min=60, max=364, avg=230.34, stdev=55.66 00:27:58.332 lat (msec): min=60, max=364, avg=230.40, stdev=55.67 00:27:58.332 clat percentiles (msec): 00:27:58.332 | 1.00th=[ 61], 5.00th=[ 132], 10.00th=[ 161], 20.00th=[ 180], 00:27:58.332 | 30.00th=[ 220], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 253], 00:27:58.332 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 268], 95.00th=[ 288], 00:27:58.332 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:27:58.332 | 99.99th=[ 363] 00:27:58.332 bw ( KiB/s): min= 240, max= 513, per=4.04%, avg=274.45, stdev=61.69, samples=20 00:27:58.332 iops : min= 60, max= 128, avg=68.60, stdev=15.37, samples=20 00:27:58.332 lat (msec) : 100=4.56%, 250=46.87%, 500=48.58% 00:27:58.332 cpu : usr=98.11%, sys=1.47%, ctx=18, majf=0, minf=34 00:27:58.332 IO depths : 1=3.4%, 2=9.7%, 4=25.1%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:27:58.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.332 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.332 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.332 filename2: (groupid=0, jobs=1): err= 0: pid=342821: Fri Jul 26 14:22:04 2024 00:27:58.332 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10100msec) 00:27:58.332 slat (nsec): min=8754, max=88707, avg=34509.10, stdev=21799.02 00:27:58.332 clat (msec): min=177, max=350, avg=246.00, stdev=29.66 00:27:58.332 lat (msec): min=177, max=350, avg=246.04, stdev=29.65 00:27:58.332 clat percentiles (msec): 00:27:58.332 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 209], 20.00th=[ 228], 00:27:58.332 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 255], 00:27:58.332 | 70.00th=[ 262], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 271], 00:27:58.332 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 00:27:58.332 | 99.99th=[ 351] 00:27:58.332 bw ( KiB/s): min= 127, max= 384, per=3.76%, avg=255.95, stdev=58.85, samples=20 00:27:58.332 iops : min= 31, max= 96, avg=63.95, stdev=14.80, samples=20 00:27:58.332 lat (msec) : 250=43.90%, 500=56.10% 00:27:58.332 cpu : usr=98.01%, sys=1.58%, ctx=18, majf=0, minf=42 00:27:58.332 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:58.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.332 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.332 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.332 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:58.332 00:27:58.332 Run status group 0 (all jobs): 00:27:58.332 
READ: bw=6777KiB/s (6939kB/s), 260KiB/s-377KiB/s (266kB/s-386kB/s), io=67.1MiB (70.3MB), run=10039-10136msec 00:27:58.332 14:22:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:58.332 14:22:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:58.332 14:22:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:58.332 14:22:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:58.332 14:22:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:58.332 14:22:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:58.332 14:22:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.332 14:22:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:58.332 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:58.333 bdev_null0 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:58.333 [2024-07-26 14:22:05.070045] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
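For reference, the create_subsystem helper traced above boils down to four RPCs per subsystem: back it with a null bdev carrying DIF metadata, create the NVMe-oF subsystem, attach the namespace, and add a TCP listener. A minimal standalone sketch of the same sequence, assuming a running SPDK target and that scripts/rpc.py is invoked from the SPDK tree (both assumptions; the harness does this through rpc_cmd):

    # 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF
    # type 1, matching the rpc_cmd arguments traced above.
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420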
00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:58.333 bdev_null1 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.333 { 00:27:58.333 "params": { 00:27:58.333 "name": "Nvme$subsystem", 00:27:58.333 "trtype": "$TEST_TRANSPORT", 00:27:58.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.333 "adrfam": "ipv4", 00:27:58.333 "trsvcid": "$NVMF_PORT", 
00:27:58.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.333 "hdgst": ${hdgst:-false}, 00:27:58.333 "ddgst": ${ddgst:-false} 00:27:58.333 }, 00:27:58.333 "method": "bdev_nvme_attach_controller" 00:27:58.333 } 00:27:58.333 EOF 00:27:58.333 )") 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.333 { 00:27:58.333 "params": { 00:27:58.333 "name": "Nvme$subsystem", 00:27:58.333 "trtype": "$TEST_TRANSPORT", 00:27:58.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.333 "adrfam": "ipv4", 00:27:58.333 "trsvcid": "$NVMF_PORT", 00:27:58.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.333 "hdgst": ${hdgst:-false}, 00:27:58.333 "ddgst": ${ddgst:-false} 00:27:58.333 }, 00:27:58.333 "method": "bdev_nvme_attach_controller" 00:27:58.333 } 00:27:58.333 EOF 00:27:58.333 )") 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:58.333 "params": { 00:27:58.333 "name": "Nvme0", 00:27:58.333 "trtype": "tcp", 00:27:58.333 "traddr": "10.0.0.2", 00:27:58.333 "adrfam": "ipv4", 00:27:58.333 "trsvcid": "4420", 00:27:58.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:58.333 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:58.333 "hdgst": false, 00:27:58.333 "ddgst": false 00:27:58.333 }, 00:27:58.333 "method": "bdev_nvme_attach_controller" 00:27:58.333 },{ 00:27:58.333 "params": { 00:27:58.333 "name": "Nvme1", 00:27:58.333 "trtype": "tcp", 00:27:58.333 "traddr": "10.0.0.2", 00:27:58.333 "adrfam": "ipv4", 00:27:58.333 "trsvcid": "4420", 00:27:58.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:58.333 "hdgst": false, 00:27:58.333 "ddgst": false 00:27:58.333 }, 00:27:58.333 "method": "bdev_nvme_attach_controller" 00:27:58.333 }' 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:58.333 14:22:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:58.334 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:58.334 ... 00:27:58.334 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:58.334 ... 
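That expanded configuration, plus a generated fio job file, is what the two /dev/fd arguments carry into the traced fio command. To run the bdev plugin outside the harness, an equivalent invocation with ordinary files would look roughly like this (the paths and the Nvme0n1 bdev name are illustrative; the plugin resolves filename= against bdev names, not filesystem paths, and SPDK's fio plugin documentation requires thread=1):

    # LD_PRELOAD injects the SPDK bdev ioengine into a stock fio binary,
    # exactly as in the LD_PRELOAD line traced above.
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf=bdev.json --thread=1 \
        --name=filename0 --filename=Nvme0n1 \
        --rw=randread --bs=8k --iodepth=8 --runtime=5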
00:27:58.334 fio-3.35 00:27:58.334 Starting 4 threads 00:27:58.334 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.595 00:28:03.595 filename0: (groupid=0, jobs=1): err= 0: pid=344204: Fri Jul 26 14:22:11 2024 00:28:03.595 read: IOPS=1849, BW=14.5MiB/s (15.2MB/s)(72.3MiB/5005msec) 00:28:03.595 slat (nsec): min=6241, max=72581, avg=22219.78, stdev=9991.06 00:28:03.595 clat (usec): min=915, max=9304, avg=4247.56, stdev=313.71 00:28:03.595 lat (usec): min=934, max=9352, avg=4269.78, stdev=314.23 00:28:03.595 clat percentiles (usec): 00:28:03.595 | 1.00th=[ 3326], 5.00th=[ 3851], 10.00th=[ 3982], 20.00th=[ 4113], 00:28:03.595 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4293], 00:28:03.595 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 4555], 00:28:03.595 | 99.00th=[ 4883], 99.50th=[ 5145], 99.90th=[ 7046], 99.95th=[ 9110], 00:28:03.595 | 99.99th=[ 9241] 00:28:03.595 bw ( KiB/s): min=14480, max=15168, per=25.31%, avg=14800.00, stdev=197.69, samples=10 00:28:03.595 iops : min= 1810, max= 1896, avg=1850.00, stdev=24.71, samples=10 00:28:03.595 lat (usec) : 1000=0.01% 00:28:03.595 lat (msec) : 2=0.06%, 4=10.22%, 10=89.71% 00:28:03.595 cpu : usr=95.26%, sys=4.04%, ctx=73, majf=0, minf=0 00:28:03.595 IO depths : 1=1.4%, 2=16.2%, 4=57.2%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.595 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.595 issued rwts: total=9258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.595 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:03.595 filename0: (groupid=0, jobs=1): err= 0: pid=344205: Fri Jul 26 14:22:11 2024 00:28:03.595 read: IOPS=1846, BW=14.4MiB/s (15.1MB/s)(72.1MiB/5001msec) 00:28:03.595 slat (nsec): min=4747, max=75584, avg=22648.09, stdev=11713.87 00:28:03.595 clat (usec): min=875, max=7980, avg=4243.95, stdev=472.54 00:28:03.595 lat (usec): min=888, max=8004, avg=4266.59, stdev=473.40 00:28:03.595 clat percentiles (usec): 00:28:03.595 | 1.00th=[ 2212], 5.00th=[ 3818], 10.00th=[ 3982], 20.00th=[ 4113], 00:28:03.595 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:28:03.595 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 4555], 00:28:03.595 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[ 7439], 99.95th=[ 7504], 00:28:03.595 | 99.99th=[ 7963] 00:28:03.595 bw ( KiB/s): min=14592, max=15248, per=25.26%, avg=14776.89, stdev=201.29, samples=9 00:28:03.595 iops : min= 1824, max= 1906, avg=1847.11, stdev=25.16, samples=9 00:28:03.595 lat (usec) : 1000=0.08% 00:28:03.595 lat (msec) : 2=0.78%, 4=9.42%, 10=89.72% 00:28:03.595 cpu : usr=95.02%, sys=4.52%, ctx=10, majf=0, minf=0 00:28:03.595 IO depths : 1=1.2%, 2=21.0%, 4=53.2%, 8=24.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.595 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.595 issued rwts: total=9232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.596 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:03.596 filename1: (groupid=0, jobs=1): err= 0: pid=344206: Fri Jul 26 14:22:11 2024 00:28:03.596 read: IOPS=1841, BW=14.4MiB/s (15.1MB/s)(72.0MiB/5003msec) 00:28:03.596 slat (nsec): min=4199, max=75661, avg=22019.65, stdev=11518.49 00:28:03.596 clat (usec): min=822, max=8238, avg=4262.70, stdev=406.19 00:28:03.596 lat (usec): min=835, max=8252, avg=4284.72, stdev=406.99 00:28:03.596 clat 
percentiles (usec): 00:28:03.596 | 1.00th=[ 3130], 5.00th=[ 3949], 10.00th=[ 4047], 20.00th=[ 4113], 00:28:03.596 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:28:03.596 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 4555], 00:28:03.596 | 99.00th=[ 5866], 99.50th=[ 6521], 99.90th=[ 7701], 99.95th=[ 8029], 00:28:03.596 | 99.99th=[ 8225] 00:28:03.596 bw ( KiB/s): min=14496, max=15102, per=25.18%, avg=14727.80, stdev=176.62, samples=10 00:28:03.596 iops : min= 1812, max= 1887, avg=1840.90, stdev=21.90, samples=10 00:28:03.596 lat (usec) : 1000=0.04% 00:28:03.596 lat (msec) : 2=0.41%, 4=6.90%, 10=92.64% 00:28:03.596 cpu : usr=95.00%, sys=4.54%, ctx=9, majf=0, minf=9 00:28:03.596 IO depths : 1=1.1%, 2=18.9%, 4=55.3%, 8=24.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.596 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.596 issued rwts: total=9211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.596 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:03.596 filename1: (groupid=0, jobs=1): err= 0: pid=344207: Fri Jul 26 14:22:11 2024 00:28:03.596 read: IOPS=1816, BW=14.2MiB/s (14.9MB/s)(71.6MiB/5042msec) 00:28:03.596 slat (nsec): min=6418, max=79757, avg=22657.73, stdev=11919.91 00:28:03.596 clat (usec): min=704, max=44637, avg=4285.34, stdev=780.46 00:28:03.596 lat (usec): min=718, max=44656, avg=4308.00, stdev=780.54 00:28:03.596 clat percentiles (usec): 00:28:03.596 | 1.00th=[ 2008], 5.00th=[ 3949], 10.00th=[ 4047], 20.00th=[ 4113], 00:28:03.596 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:28:03.596 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 4686], 00:28:03.596 | 99.00th=[ 6718], 99.50th=[ 7242], 99.90th=[ 7635], 99.95th=[ 7701], 00:28:03.596 | 99.99th=[44827] 00:28:03.596 bw ( KiB/s): min=14092, max=14848, per=25.06%, avg=14655.60, stdev=231.96, samples=10 00:28:03.596 iops : min= 1761, max= 1856, avg=1831.90, stdev=29.13, samples=10 00:28:03.596 lat (usec) : 750=0.01%, 1000=0.12% 00:28:03.596 lat (msec) : 2=0.86%, 4=6.12%, 10=92.86%, 50=0.02% 00:28:03.596 cpu : usr=94.98%, sys=4.54%, ctx=8, majf=0, minf=9 00:28:03.596 IO depths : 1=0.8%, 2=21.0%, 4=53.4%, 8=24.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.596 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.596 issued rwts: total=9160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.596 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:03.596 00:28:03.596 Run status group 0 (all jobs): 00:28:03.596 READ: bw=57.1MiB/s (59.9MB/s), 14.2MiB/s-14.5MiB/s (14.9MB/s-15.2MB/s), io=288MiB (302MB), run=5001-5042msec 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.596 
14:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.596 00:28:03.596 real 0m24.336s 00:28:03.596 user 4m35.436s 00:28:03.596 sys 0m6.351s 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:03.596 14:22:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:03.596 ************************************ 00:28:03.596 END TEST fio_dif_rand_params 00:28:03.596 ************************************ 00:28:03.596 14:22:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:03.596 14:22:11 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:03.596 14:22:11 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:03.596 14:22:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:03.596 ************************************ 00:28:03.596 START TEST fio_dif_digest 00:28:03.596 ************************************ 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:03.596 14:22:11 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:03.596 bdev_null0 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:03.596 [2024-07-26 14:22:11.462681] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.596 14:22:11 
nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:03.596 14:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.596 { 00:28:03.596 "params": { 00:28:03.596 "name": "Nvme$subsystem", 00:28:03.596 "trtype": "$TEST_TRANSPORT", 00:28:03.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.596 "adrfam": "ipv4", 00:28:03.596 "trsvcid": "$NVMF_PORT", 00:28:03.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.597 "hdgst": ${hdgst:-false}, 00:28:03.597 "ddgst": ${ddgst:-false} 00:28:03.597 }, 00:28:03.597 "method": "bdev_nvme_attach_controller" 00:28:03.597 } 00:28:03.597 EOF 00:28:03.597 )") 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:03.597 "params": { 00:28:03.597 "name": "Nvme0", 00:28:03.597 "trtype": "tcp", 00:28:03.597 "traddr": "10.0.0.2", 00:28:03.597 "adrfam": "ipv4", 00:28:03.597 "trsvcid": "4420", 00:28:03.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:03.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:03.597 "hdgst": true, 00:28:03.597 "ddgst": true 00:28:03.597 }, 00:28:03.597 "method": "bdev_nvme_attach_controller" 00:28:03.597 }' 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:03.597 14:22:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:03.854 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:03.854 ... 
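The only functional difference from the earlier random-params run is visible in the printed parameters: hdgst and ddgst are now true, because dif.sh set hdgst=true and ddgst=true before calling gen_nvmf_target_json and the ${hdgst:-false}/${ddgst:-false} defaults picked them up. In other words, enabling NVMe/TCP header and data digests on the initiator side takes nothing more than the following sketch (writing the output to a file instead of the harness's /dev/fd is an assumption for standalone use):

    # With these set, the generated bdev_nvme_attach_controller params carry
    # "hdgst": true, "ddgst": true, as shown in the printf record above.
    hdgst=true ddgst=true
    gen_nvmf_target_json 0 > digest_bdev.json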
00:28:03.854 fio-3.35 00:28:03.854 Starting 3 threads 00:28:03.854 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.050 00:28:16.050 filename0: (groupid=0, jobs=1): err= 0: pid=345078: Fri Jul 26 14:22:22 2024 00:28:16.050 read: IOPS=186, BW=23.3MiB/s (24.4MB/s)(234MiB/10047msec) 00:28:16.050 slat (nsec): min=6132, max=44841, avg=16867.42, stdev=5236.10 00:28:16.050 clat (usec): min=12672, max=54081, avg=16049.20, stdev=1594.26 00:28:16.050 lat (usec): min=12686, max=54095, avg=16066.07, stdev=1594.03 00:28:16.050 clat percentiles (usec): 00:28:16.050 | 1.00th=[13698], 5.00th=[14222], 10.00th=[14746], 20.00th=[15139], 00:28:16.050 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15926], 60.00th=[16188], 00:28:16.050 | 70.00th=[16581], 80.00th=[16909], 90.00th=[17433], 95.00th=[17957], 00:28:16.050 | 99.00th=[19006], 99.50th=[19792], 99.90th=[48497], 99.95th=[54264], 00:28:16.050 | 99.99th=[54264] 00:28:16.050 bw ( KiB/s): min=23086, max=24832, per=32.21%, avg=23938.30, stdev=438.61, samples=20 00:28:16.050 iops : min= 180, max= 194, avg=187.00, stdev= 3.46, samples=20 00:28:16.050 lat (msec) : 20=99.73%, 50=0.21%, 100=0.05% 00:28:16.050 cpu : usr=93.91%, sys=5.61%, ctx=20, majf=0, minf=140 00:28:16.050 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:16.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.050 issued rwts: total=1873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.050 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:16.050 filename0: (groupid=0, jobs=1): err= 0: pid=345079: Fri Jul 26 14:22:22 2024 00:28:16.050 read: IOPS=190, BW=23.9MiB/s (25.0MB/s)(240MiB/10046msec) 00:28:16.050 slat (usec): min=6, max=122, avg=20.82, stdev= 7.42 00:28:16.050 clat (usec): min=12248, max=55673, avg=15674.29, stdev=1619.80 00:28:16.050 lat (usec): min=12269, max=55692, avg=15695.11, stdev=1619.59 00:28:16.050 clat percentiles (usec): 00:28:16.050 | 1.00th=[13042], 5.00th=[13829], 10.00th=[14222], 20.00th=[14746], 00:28:16.050 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15664], 60.00th=[15926], 00:28:16.050 | 70.00th=[16188], 80.00th=[16581], 90.00th=[17171], 95.00th=[17433], 00:28:16.050 | 99.00th=[18220], 99.50th=[19006], 99.90th=[46924], 99.95th=[55837], 00:28:16.050 | 99.99th=[55837] 00:28:16.050 bw ( KiB/s): min=23296, max=25344, per=32.98%, avg=24512.00, stdev=504.36, samples=20 00:28:16.050 iops : min= 182, max= 198, avg=191.50, stdev= 3.94, samples=20 00:28:16.050 lat (msec) : 20=99.74%, 50=0.21%, 100=0.05% 00:28:16.050 cpu : usr=90.65%, sys=7.07%, ctx=416, majf=0, minf=274 00:28:16.050 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:16.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.050 issued rwts: total=1917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.050 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:16.050 filename0: (groupid=0, jobs=1): err= 0: pid=345080: Fri Jul 26 14:22:22 2024 00:28:16.050 read: IOPS=203, BW=25.4MiB/s (26.7MB/s)(256MiB/10046msec) 00:28:16.050 slat (nsec): min=7510, max=51747, avg=16513.29, stdev=5132.24 00:28:16.050 clat (usec): min=11354, max=54240, avg=14703.09, stdev=1481.04 00:28:16.050 lat (usec): min=11368, max=54273, avg=14719.60, stdev=1481.48 00:28:16.050 clat percentiles (usec): 00:28:16.050 | 1.00th=[12518], 
5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 00:28:16.050 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[14877], 00:28:16.051 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15795], 95.00th=[16188], 00:28:16.051 | 99.00th=[16909], 99.50th=[17433], 99.90th=[19268], 99.95th=[47449], 00:28:16.051 | 99.99th=[54264] 00:28:16.051 bw ( KiB/s): min=25344, max=26880, per=35.17%, avg=26137.60, stdev=454.17, samples=20 00:28:16.051 iops : min= 198, max= 210, avg=204.20, stdev= 3.55, samples=20 00:28:16.051 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:28:16.051 cpu : usr=93.71%, sys=5.80%, ctx=20, majf=0, minf=186 00:28:16.051 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:16.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.051 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.051 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:16.051 00:28:16.051 Run status group 0 (all jobs): 00:28:16.051 READ: bw=72.6MiB/s (76.1MB/s), 23.3MiB/s-25.4MiB/s (24.4MB/s-26.7MB/s), io=729MiB (765MB), run=10046-10047msec 00:28:16.051 14:22:22 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:16.051 14:22:22 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:16.051 14:22:22 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:16.051 14:22:22 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:16.051 14:22:22 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:16.051 14:22:22 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:16.051 14:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.051 14:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:16.051 14:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.051 14:22:22 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:16.051 14:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.051 14:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:16.051 14:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.051 00:28:16.051 real 0m11.182s 00:28:16.051 user 0m29.070s 00:28:16.051 sys 0m2.156s 00:28:16.051 14:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:16.051 14:22:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:16.051 ************************************ 00:28:16.051 END TEST fio_dif_digest 00:28:16.051 ************************************ 00:28:16.051 14:22:22 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:16.051 14:22:22 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:16.051 14:22:22 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:16.051 14:22:22 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:16.051 14:22:22 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:16.051 14:22:22 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:16.051 14:22:22 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:16.051 14:22:22 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:16.051 rmmod nvme_tcp 00:28:16.051 rmmod nvme_fabrics 00:28:16.051 rmmod 
nvme_keyring 00:28:16.051 14:22:22 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:16.051 14:22:22 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:16.051 14:22:22 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:16.051 14:22:22 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 338903 ']' 00:28:16.051 14:22:22 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 338903 00:28:16.051 14:22:22 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 338903 ']' 00:28:16.051 14:22:22 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 338903 00:28:16.051 14:22:22 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:28:16.051 14:22:22 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:16.051 14:22:22 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 338903 00:28:16.051 14:22:22 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:16.051 14:22:22 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:16.051 14:22:22 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 338903' 00:28:16.051 killing process with pid 338903 00:28:16.051 14:22:22 nvmf_dif -- common/autotest_common.sh@969 -- # kill 338903 00:28:16.051 14:22:22 nvmf_dif -- common/autotest_common.sh@974 -- # wait 338903 00:28:16.051 14:22:22 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:16.051 14:22:22 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:16.309 Waiting for block devices as requested 00:28:16.309 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:16.309 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:16.568 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:16.568 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:16.568 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:16.568 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:16.828 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:16.828 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:16.828 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:28:17.088 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:17.088 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:17.088 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:17.358 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:17.358 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:17.358 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:17.358 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:17.617 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:17.617 14:22:25 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:17.617 14:22:25 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:17.617 14:22:25 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:17.617 14:22:25 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:17.617 14:22:25 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.617 14:22:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:17.617 14:22:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.149 14:22:27 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:20.149 00:28:20.149 real 1m7.248s 00:28:20.149 user 6m31.302s 00:28:20.149 sys 0m18.485s 00:28:20.149 14:22:27 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:20.149 14:22:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:20.149 
************************************ 00:28:20.149 END TEST nvmf_dif 00:28:20.149 ************************************ 00:28:20.149 14:22:27 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:20.149 14:22:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:20.149 14:22:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:20.149 14:22:27 -- common/autotest_common.sh@10 -- # set +x 00:28:20.149 ************************************ 00:28:20.149 START TEST nvmf_abort_qd_sizes 00:28:20.149 ************************************ 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:20.149 * Looking for test storage... 00:28:20.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:20.149 14:22:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.149 14:22:27 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:20.150 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:20.150 14:22:27 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:20.150 14:22:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:22.050 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:22.051 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:22.051 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:22.051 Found net devices under 0000:09:00.0: cvl_0_0 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:22.051 Found net devices under 0000:09:00.1: cvl_0_1 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
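The discovery pass above resolves the configured e810 NIC class to concrete hardware: two PCI functions (0000:09:00.0 and 0000:09:00.1, vendor 0x8086 device 0x159b, driver ice) and their kernel netdevs cvl_0_0 and cvl_0_1, read back from sysfs. A rough standalone equivalent of that lookup, assuming pciutils is installed (the cvl_* names are renamed E810 netdevs specific to this rig, not a general convention):

  # list E810 functions by vendor:device ID, then map each to its netdev via sysfs
  for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "Found $bdf"
      ls "/sys/bus/pci/devices/$bdf/net"    # prints e.g. cvl_0_0
  done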
00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:28:22.051 00:28:22.051 --- 10.0.0.2 ping statistics --- 00:28:22.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.051 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:28:22.051 00:28:22.051 --- 10.0.0.1 ping statistics --- 00:28:22.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.051 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:22.051 14:22:29 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:22.985 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:22.985 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:22.985 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:22.986 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:22.986 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:22.986 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:22.986 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:22.986 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:22.986 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:22.986 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:23.245 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:23.245 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:23.245 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:23.245 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:23.245 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:23.245 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:24.185 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=349872 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 349872 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 349872 ']' 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
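The namespace bring-up above is what lets a single host exercise real E810 wire traffic: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, the NVMe/TCP port 4420 is opened in iptables, and the two cross-namespace pings prove the path end to end. Condensed from the trace into a minimal standalone form (same interface and namespace names):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                         # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator

This is also why the target application is prefixed with NVMF_TARGET_NS_CMD above: nvmf_tgt runs inside the namespace via `ip netns exec cvl_0_0_ns_spdk`.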
00:28:24.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:24.185 14:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:24.185 [2024-07-26 14:22:32.192910] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:28:24.185 [2024-07-26 14:22:32.193003] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.444 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.444 [2024-07-26 14:22:32.255798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:24.444 [2024-07-26 14:22:32.358327] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.444 [2024-07-26 14:22:32.358382] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.444 [2024-07-26 14:22:32.358405] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.444 [2024-07-26 14:22:32.358416] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.444 [2024-07-26 14:22:32.358425] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:24.444 [2024-07-26 14:22:32.358510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.444 [2024-07-26 14:22:32.358575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:24.444 [2024-07-26 14:22:32.358641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:24.444 [2024-07-26 14:22:32.358644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.702 14:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.702 14:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:28:24.702 14:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:24.702 14:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:24.702 14:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:24.702 14:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.702 14:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:24.702 14:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:24.702 14:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:24.702 14:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:24.702 14:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:24.702 14:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:28:24.703 14:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:24.703 14:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:24.703 14:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:28:24.703 14:22:32 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:24.703 14:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:24.703 14:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:24.703 14:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:28:24.703 14:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:28:24.703 14:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:24.703 14:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:28:24.703 14:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:24.703 14:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:24.703 14:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:24.703 14:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:24.703 ************************************ 00:28:24.703 START TEST spdk_target_abort 00:28:24.703 ************************************ 00:28:24.703 14:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:28:24.703 14:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:24.703 14:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:28:24.703 14:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.703 14:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:27.981 spdk_targetn1 00:28:27.981 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:27.982 [2024-07-26 14:22:35.397469] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:27.982 [2024-07-26 14:22:35.429733] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:27.982 14:22:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:27.982 EAL: No free 2048 kB hugepages 
reported on node 1 00:28:31.260 Initializing NVMe Controllers 00:28:31.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:31.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:31.260 Initialization complete. Launching workers. 00:28:31.260 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13584, failed: 0 00:28:31.260 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1276, failed to submit 12308 00:28:31.260 success 757, unsuccess 519, failed 0 00:28:31.260 14:22:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:31.260 14:22:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:31.260 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.537 Initializing NVMe Controllers 00:28:34.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:34.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:34.537 Initialization complete. Launching workers. 00:28:34.537 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8799, failed: 0 00:28:34.537 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1267, failed to submit 7532 00:28:34.537 success 356, unsuccess 911, failed 0 00:28:34.537 14:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:34.537 14:22:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:34.537 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.813 Initializing NVMe Controllers 00:28:37.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:37.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:37.813 Initialization complete. Launching workers. 
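Each abort run prints four counters, and they reconcile exactly: every completed I/O either had an abort submitted against it or counted toward "failed to submit", and every submitted abort lands in either success or unsuccess. For the qd=4 spdk_target run above (the qd=24 numbers check out the same way):

  echo $((  757 +   519 ))   # 1276  = aborts submitted
  echo $(( 1276 + 12308 ))   # 13584 = I/O completed

Raising the queue depth gives the abort path more in-flight commands to chase, which is the dimension this test sweeps (qds=(4 24 64) above).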
00:28:37.813 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31896, failed: 0 00:28:37.813 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2839, failed to submit 29057 00:28:37.813 success 493, unsuccess 2346, failed 0 00:28:37.813 14:22:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:37.813 14:22:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.813 14:22:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:37.813 14:22:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.813 14:22:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:37.813 14:22:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.813 14:22:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:38.743 14:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.743 14:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 349872 00:28:38.743 14:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 349872 ']' 00:28:38.743 14:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 349872 00:28:38.743 14:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:28:38.743 14:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:38.743 14:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 349872 00:28:38.743 14:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:38.743 14:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:38.743 14:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 349872' 00:28:38.743 killing process with pid 349872 00:28:38.743 14:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 349872 00:28:38.743 14:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 349872 00:28:39.001 00:28:39.001 real 0m14.283s 00:28:39.001 user 0m53.942s 00:28:39.001 sys 0m2.590s 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:39.001 ************************************ 00:28:39.001 END TEST spdk_target_abort 00:28:39.001 ************************************ 00:28:39.001 14:22:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:39.001 14:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:39.001 14:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:39.001 14:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:39.001 ************************************ 00:28:39.001 START TEST kernel_target_abort 00:28:39.001 
************************************ 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:39.001 14:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:40.375 Waiting for block devices as requested 00:28:40.375 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:40.375 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:40.375 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:40.375 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:40.375 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:40.633 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:40.633 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:40.633 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:40.633 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:28:40.890 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:40.890 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:40.890 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:41.149 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:41.149 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:41.149 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:41.149 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:41.407 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:41.407 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:41.407 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:41.407 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:41.407 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:41.407 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:41.407 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:41.407 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:41.407 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:41.407 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:41.407 No valid GPT data, bailing 00:28:41.407 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:41.665 14:22:49 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:28:41.665 00:28:41.665 Discovery Log Number of Records 2, Generation counter 2 00:28:41.665 =====Discovery Log Entry 0====== 00:28:41.665 trtype: tcp 00:28:41.665 adrfam: ipv4 00:28:41.665 subtype: current discovery subsystem 00:28:41.665 treq: not specified, sq flow control disable supported 00:28:41.665 portid: 1 00:28:41.665 trsvcid: 4420 00:28:41.665 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:41.665 traddr: 10.0.0.1 00:28:41.665 eflags: none 00:28:41.665 sectype: none 00:28:41.665 =====Discovery Log Entry 1====== 00:28:41.665 trtype: tcp 00:28:41.665 adrfam: ipv4 00:28:41.665 subtype: nvme subsystem 00:28:41.665 treq: not specified, sq flow control disable supported 00:28:41.665 portid: 1 00:28:41.665 trsvcid: 4420 00:28:41.665 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:41.665 traddr: 10.0.0.1 00:28:41.665 eflags: none 00:28:41.665 sectype: none 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:41.665 14:22:49 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:41.665 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:41.666 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:41.666 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:41.666 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:41.666 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:41.666 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:41.666 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:41.666 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:41.666 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:41.666 14:22:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:41.666 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.943 Initializing NVMe Controllers 00:28:44.943 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:44.943 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:44.943 Initialization complete. Launching workers. 00:28:44.943 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56865, failed: 0 00:28:44.943 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56865, failed to submit 0 00:28:44.943 success 0, unsuccess 56865, failed 0 00:28:44.943 14:22:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:44.943 14:22:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:44.943 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.218 Initializing NVMe Controllers 00:28:48.218 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:48.218 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:48.218 Initialization complete. Launching workers. 
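The kernel_target_abort half swaps SPDK's nvmf_tgt for the in-kernel nvmet driver, configured purely through configfs: the mkdir/echo/ln -s sequence earlier in this test creates the subsystem, exposes /dev/nvme0n1 as namespace 1, and binds a TCP port on 10.0.0.1:4420. xtrace does not record where each echo is redirected, so the sketch below fills the targets in from the standard nvmet configfs layout (the serial-number write corresponds to the `echo SPDK-nqn...` step above):

  modprobe nvmet    # nvmet-tcp is pulled in on demand when the port binds
  cd /sys/kernel/config/nvmet
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  mkdir ports/1
  echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_serial
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  echo 10.0.0.1     > ports/1/addr_traddr
  echo tcp          > ports/1/addr_trtype
  echo 4420         > ports/1/addr_trsvcid
  echo ipv4         > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

The `nvme discover` output above confirms the result: two discovery-log entries on 10.0.0.1:4420, the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn.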
00:28:48.218 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 103393, failed: 0 00:28:48.219 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26102, failed to submit 77291 00:28:48.219 success 0, unsuccess 26102, failed 0 00:28:48.219 14:22:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:48.219 14:22:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:48.219 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.494 Initializing NVMe Controllers 00:28:51.494 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:51.494 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:51.494 Initialization complete. Launching workers. 00:28:51.494 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100172, failed: 0 00:28:51.494 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25038, failed to submit 75134 00:28:51.494 success 0, unsuccess 25038, failed 0 00:28:51.494 14:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:51.494 14:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:51.494 14:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:28:51.494 14:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:51.494 14:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:51.494 14:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:51.494 14:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:51.494 14:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:51.494 14:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:51.494 14:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:52.429 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:52.429 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:52.429 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:52.429 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:52.429 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:52.429 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:52.429 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:52.429 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:52.429 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:52.429 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:52.429 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:52.429 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:52.429 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:52.429 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:28:52.429 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:52.429 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:53.367 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:28:53.367 00:28:53.367 real 0m14.448s 00:28:53.367 user 0m6.683s 00:28:53.367 sys 0m3.244s 00:28:53.367 14:23:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:53.367 14:23:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.367 ************************************ 00:28:53.367 END TEST kernel_target_abort 00:28:53.367 ************************************ 00:28:53.367 14:23:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:53.367 14:23:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:53.367 14:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:53.367 14:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:28:53.367 14:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:53.367 14:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:28:53.367 14:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:53.367 14:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:53.367 rmmod nvme_tcp 00:28:53.367 rmmod nvme_fabrics 00:28:53.626 rmmod nvme_keyring 00:28:53.626 14:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:53.626 14:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:28:53.626 14:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:28:53.627 14:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 349872 ']' 00:28:53.627 14:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 349872 00:28:53.627 14:23:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 349872 ']' 00:28:53.627 14:23:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 349872 00:28:53.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (349872) - No such process 00:28:53.627 14:23:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 349872 is not found' 00:28:53.627 Process with pid 349872 is not found 00:28:53.627 14:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:53.627 14:23:01 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:54.562 Waiting for block devices as requested 00:28:54.562 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:54.562 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:54.820 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:54.820 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:54.820 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:55.079 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:55.079 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:55.079 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:55.079 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:28:55.337 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:55.337 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:55.595 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:55.595 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:55.595 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:55.595 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:55.853 0000:80:04.1 (8086 
0e21): vfio-pci -> ioatdma 00:28:55.853 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:55.853 14:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:55.853 14:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:55.853 14:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:55.854 14:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:55.854 14:23:03 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.854 14:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:55.854 14:23:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.386 14:23:05 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:58.386 00:28:58.386 real 0m38.259s 00:28:58.386 user 1m2.645s 00:28:58.386 sys 0m9.256s 00:28:58.386 14:23:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.386 14:23:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:58.386 ************************************ 00:28:58.386 END TEST nvmf_abort_qd_sizes 00:28:58.386 ************************************ 00:28:58.386 14:23:05 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:58.386 14:23:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:58.386 14:23:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.386 14:23:05 -- common/autotest_common.sh@10 -- # set +x 00:28:58.386 ************************************ 00:28:58.386 START TEST keyring_file 00:28:58.386 ************************************ 00:28:58.386 14:23:05 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:58.386 * Looking for test storage... 
00:28:58.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:58.386 14:23:05 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:58.386 14:23:05 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.386 14:23:05 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.386 14:23:06 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.386 14:23:06 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.386 14:23:06 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.386 14:23:06 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.386 14:23:06 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.386 14:23:06 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.386 14:23:06 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:58.386 14:23:06 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@47 -- # : 0 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:58.386 14:23:06 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:58.386 14:23:06 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:58.386 14:23:06 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:58.386 14:23:06 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:58.386 14:23:06 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:58.386 14:23:06 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.i2JPTVqY40 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:58.386 14:23:06 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.i2JPTVqY40 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.i2JPTVqY40 00:28:58.386 14:23:06 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.i2JPTVqY40 00:28:58.386 14:23:06 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ayisxa1siR 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:58.386 14:23:06 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ayisxa1siR 00:28:58.386 14:23:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ayisxa1siR 00:28:58.386 14:23:06 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Ayisxa1siR 00:28:58.386 14:23:06 keyring_file -- keyring/file.sh@30 -- # tgtpid=355642 00:28:58.386 14:23:06 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:58.386 14:23:06 keyring_file -- keyring/file.sh@32 -- # waitforlisten 355642 00:28:58.386 14:23:06 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 355642 ']' 00:28:58.386 14:23:06 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.386 14:23:06 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:58.386 14:23:06 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.386 14:23:06 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:58.386 14:23:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:58.386 [2024-07-26 14:23:06.138400] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 
00:28:58.386 [2024-07-26 14:23:06.138501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355642 ] 00:28:58.386 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.386 [2024-07-26 14:23:06.199471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.386 [2024-07-26 14:23:06.305366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:28:58.645 14:23:06 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:58.645 [2024-07-26 14:23:06.521388] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.645 null0 00:28:58.645 [2024-07-26 14:23:06.553439] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:58.645 [2024-07-26 14:23:06.553924] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:58.645 [2024-07-26 14:23:06.561444] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.645 14:23:06 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:58.645 [2024-07-26 14:23:06.569466] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:58.645 request: 00:28:58.645 { 00:28:58.645 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:58.645 "secure_channel": false, 00:28:58.645 "listen_address": { 00:28:58.645 "trtype": "tcp", 00:28:58.645 "traddr": "127.0.0.1", 00:28:58.645 "trsvcid": "4420" 00:28:58.645 }, 00:28:58.645 "method": "nvmf_subsystem_add_listener", 00:28:58.645 "req_id": 1 00:28:58.645 } 00:28:58.645 Got JSON-RPC error response 00:28:58.645 response: 00:28:58.645 { 00:28:58.645 "code": -32602, 00:28:58.645 "message": "Invalid parameters" 00:28:58.645 } 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@653 -- # es=1 
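The prep_key helper traced above reduces to wrapping the configured hex string in the NVMe TLS PSK interchange format and locking the resulting file down to 0600. A minimal stand-alone sketch of that step, assuming the configured hex string is used verbatim as the key material and that digest 0 maps to hash id 00; the exact python one-liner behind format_interchange_psk is not shown in the trace, so this body is illustrative only:

path=$(mktemp)                     # e.g. /tmp/tmp.i2JPTVqY40 above
python - 00112233445566778899aabbccddeeff 0 > "$path" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()         # assumption: the configured string itself is the key bytes
digest = int(sys.argv[2])          # 0 selects "no PSK digest", as in the traces above
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # 4-byte little-endian CRC32 trailer
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY
chmod 0600 "$path"                 # keyring_file rejects anything looser, as the run shows later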
00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:58.645 14:23:06 keyring_file -- keyring/file.sh@46 -- # bperfpid=355646 00:28:58.645 14:23:06 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:58.645 14:23:06 keyring_file -- keyring/file.sh@48 -- # waitforlisten 355646 /var/tmp/bperf.sock 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 355646 ']' 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:58.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:58.645 14:23:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:58.645 [2024-07-26 14:23:06.613799] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:28:58.645 [2024-07-26 14:23:06.613881] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355646 ] 00:28:58.645 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.904 [2024-07-26 14:23:06.670284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.904 [2024-07-26 14:23:06.774422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.904 14:23:06 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:58.904 14:23:06 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:28:58.904 14:23:06 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.i2JPTVqY40 00:28:58.904 14:23:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.i2JPTVqY40 00:28:59.162 14:23:07 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Ayisxa1siR 00:28:59.162 14:23:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Ayisxa1siR 00:28:59.419 14:23:07 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:59.419 14:23:07 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:59.419 14:23:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.419 14:23:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.419 14:23:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:59.677 14:23:07 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.i2JPTVqY40 == \/\t\m\p\/\t\m\p\.\i\2\J\P\T\V\q\Y\4\0 ]] 00:28:59.677 14:23:07 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:28:59.677 14:23:07 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:59.677 14:23:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.677 14:23:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.677 14:23:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:59.934 14:23:07 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Ayisxa1siR == \/\t\m\p\/\t\m\p\.\A\y\i\s\x\a\1\s\i\R ]] 00:28:59.934 14:23:07 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:59.934 14:23:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:59.934 14:23:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:59.934 14:23:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.934 14:23:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.934 14:23:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:00.192 14:23:08 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:00.192 14:23:08 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:00.192 14:23:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:00.192 14:23:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:00.192 14:23:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:00.192 14:23:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:00.192 14:23:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:00.450 14:23:08 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:00.450 14:23:08 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:00.450 14:23:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:00.707 [2024-07-26 14:23:08.606835] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:00.707 nvme0n1 00:29:00.707 14:23:08 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:00.707 14:23:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:00.708 14:23:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:00.708 14:23:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:00.708 14:23:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:00.708 14:23:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:00.967 14:23:08 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:00.967 14:23:08 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:00.967 14:23:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:00.967 14:23:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:00.967 14:23:08 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:29:00.967 14:23:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:00.967 14:23:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:01.263 14:23:09 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:01.263 14:23:09 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:01.571 Running I/O for 1 seconds... 00:29:02.590 00:29:02.590 Latency(us) 00:29:02.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.590 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:02.590 nvme0n1 : 1.01 10066.90 39.32 0.00 0.00 12662.44 4053.52 19320.98 00:29:02.590 =================================================================================================================== 00:29:02.590 Total : 10066.90 39.32 0.00 0.00 12662.44 4053.52 19320.98 00:29:02.590 0 00:29:02.590 14:23:10 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:02.590 14:23:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:02.590 14:23:10 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:02.590 14:23:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:02.590 14:23:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:02.590 14:23:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:02.590 14:23:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:02.590 14:23:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:02.847 14:23:10 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:02.847 14:23:10 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:02.847 14:23:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:02.847 14:23:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:02.847 14:23:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:02.847 14:23:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:02.847 14:23:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.105 14:23:11 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:03.105 14:23:11 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:03.105 14:23:11 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:29:03.105 14:23:11 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:03.105 14:23:11 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:03.105 14:23:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.105 14:23:11 keyring_file -- common/autotest_common.sh@642 -- # type 
-t bperf_cmd 00:29:03.105 14:23:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.105 14:23:11 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:03.105 14:23:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:03.363 [2024-07-26 14:23:11.311228] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:03.363 [2024-07-26 14:23:11.311567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c09a0 (107): Transport endpoint is not connected 00:29:03.363 [2024-07-26 14:23:11.312560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c09a0 (9): Bad file descriptor 00:29:03.363 [2024-07-26 14:23:11.313559] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:03.363 [2024-07-26 14:23:11.313579] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:03.363 [2024-07-26 14:23:11.313593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:03.363 request: 00:29:03.363 { 00:29:03.363 "name": "nvme0", 00:29:03.363 "trtype": "tcp", 00:29:03.363 "traddr": "127.0.0.1", 00:29:03.363 "adrfam": "ipv4", 00:29:03.363 "trsvcid": "4420", 00:29:03.363 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:03.363 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:03.363 "prchk_reftag": false, 00:29:03.363 "prchk_guard": false, 00:29:03.363 "hdgst": false, 00:29:03.363 "ddgst": false, 00:29:03.363 "psk": "key1", 00:29:03.363 "method": "bdev_nvme_attach_controller", 00:29:03.363 "req_id": 1 00:29:03.363 } 00:29:03.363 Got JSON-RPC error response 00:29:03.363 response: 00:29:03.363 { 00:29:03.363 "code": -5, 00:29:03.363 "message": "Input/output error" 00:29:03.363 } 00:29:03.363 14:23:11 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:29:03.363 14:23:11 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:03.363 14:23:11 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:03.363 14:23:11 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:03.363 14:23:11 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:03.363 14:23:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:03.363 14:23:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:03.363 14:23:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:03.363 14:23:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:03.363 14:23:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.621 14:23:11 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:03.621 14:23:11 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:03.621 14:23:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:03.621 14:23:11 keyring_file -- keyring/common.sh@12 -- # jq 
-r .refcnt 00:29:03.621 14:23:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:03.621 14:23:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.621 14:23:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:03.878 14:23:11 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:03.878 14:23:11 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:03.878 14:23:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:04.136 14:23:12 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:04.136 14:23:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:04.393 14:23:12 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:04.393 14:23:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:04.393 14:23:12 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:04.650 14:23:12 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:04.651 14:23:12 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.i2JPTVqY40 00:29:04.651 14:23:12 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.i2JPTVqY40 00:29:04.651 14:23:12 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:29:04.651 14:23:12 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.i2JPTVqY40 00:29:04.651 14:23:12 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:04.651 14:23:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:04.651 14:23:12 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:04.651 14:23:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:04.651 14:23:12 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.i2JPTVqY40 00:29:04.651 14:23:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.i2JPTVqY40 00:29:04.908 [2024-07-26 14:23:12.807673] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.i2JPTVqY40': 0100660 00:29:04.908 [2024-07-26 14:23:12.807707] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:04.908 request: 00:29:04.908 { 00:29:04.908 "name": "key0", 00:29:04.908 "path": "/tmp/tmp.i2JPTVqY40", 00:29:04.908 "method": "keyring_file_add_key", 00:29:04.908 "req_id": 1 00:29:04.908 } 00:29:04.908 Got JSON-RPC error response 00:29:04.908 response: 00:29:04.908 { 00:29:04.908 "code": -1, 00:29:04.908 "message": "Operation not permitted" 00:29:04.908 } 00:29:04.908 14:23:12 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:29:04.908 14:23:12 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:04.908 14:23:12 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:04.908 14:23:12 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
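That -1 (Operation not permitted) is the keyring.c permission check doing its job: a key file that is group- or world-accessible is refused outright, and only an owner-only 0600 file is accepted. A repro of the same round trip, reusing the RPC socket and key path from this run:

chmod 0660 /tmp/tmp.i2JPTVqY40     # too permissive: keyring_file_add_key fails
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    keyring_file_add_key key0 /tmp/tmp.i2JPTVqY40 || echo "rejected, as logged above"
chmod 0600 /tmp/tmp.i2JPTVqY40     # owner-only: accepted
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    keyring_file_add_key key0 /tmp/tmp.i2JPTVqY40
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    keyring_get_keys               # key0 listed again with its path and refcnt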
00:29:04.908 14:23:12 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.i2JPTVqY40 00:29:04.908 14:23:12 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.i2JPTVqY40 00:29:04.908 14:23:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.i2JPTVqY40 00:29:05.166 14:23:13 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.i2JPTVqY40 00:29:05.166 14:23:13 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:05.166 14:23:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:05.166 14:23:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:05.166 14:23:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:05.166 14:23:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:05.166 14:23:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:05.429 14:23:13 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:05.429 14:23:13 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:05.429 14:23:13 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:29:05.429 14:23:13 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:05.429 14:23:13 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:05.429 14:23:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.429 14:23:13 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:05.429 14:23:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.429 14:23:13 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:05.429 14:23:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:05.686 [2024-07-26 14:23:13.537671] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.i2JPTVqY40': No such file or directory 00:29:05.686 [2024-07-26 14:23:13.537702] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:05.686 [2024-07-26 14:23:13.537745] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:05.686 [2024-07-26 14:23:13.537757] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:05.686 [2024-07-26 14:23:13.537769] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:05.686 request: 00:29:05.686 { 00:29:05.686 "name": "nvme0", 00:29:05.686 "trtype": "tcp", 00:29:05.686 "traddr": "127.0.0.1", 00:29:05.686 "adrfam": "ipv4", 00:29:05.686 "trsvcid": "4420", 00:29:05.686 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:29:05.686 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:05.686 "prchk_reftag": false, 00:29:05.686 "prchk_guard": false, 00:29:05.686 "hdgst": false, 00:29:05.686 "ddgst": false, 00:29:05.686 "psk": "key0", 00:29:05.686 "method": "bdev_nvme_attach_controller", 00:29:05.686 "req_id": 1 00:29:05.686 } 00:29:05.686 Got JSON-RPC error response 00:29:05.686 response: 00:29:05.686 { 00:29:05.686 "code": -19, 00:29:05.686 "message": "No such device" 00:29:05.686 } 00:29:05.686 14:23:13 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:29:05.686 14:23:13 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:05.686 14:23:13 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:05.686 14:23:13 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:05.686 14:23:13 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:05.686 14:23:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:05.943 14:23:13 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:05.943 14:23:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:05.943 14:23:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:05.943 14:23:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:05.943 14:23:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:05.943 14:23:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:05.943 14:23:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PBizHmhTpi 00:29:05.943 14:23:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:05.943 14:23:13 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:05.943 14:23:13 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:05.943 14:23:13 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:05.943 14:23:13 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:05.943 14:23:13 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:05.943 14:23:13 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:05.943 14:23:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PBizHmhTpi 00:29:05.944 14:23:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PBizHmhTpi 00:29:05.944 14:23:13 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.PBizHmhTpi 00:29:05.944 14:23:13 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PBizHmhTpi 00:29:05.944 14:23:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PBizHmhTpi 00:29:06.201 14:23:14 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:06.201 14:23:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:06.459 nvme0n1 00:29:06.459 14:23:14 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:29:06.459 14:23:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:06.459 14:23:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:06.459 14:23:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:06.459 14:23:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:06.459 14:23:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.716 14:23:14 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:06.716 14:23:14 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:06.716 14:23:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:06.973 14:23:14 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:06.973 14:23:14 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:06.973 14:23:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:06.973 14:23:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.973 14:23:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:07.231 14:23:15 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:07.231 14:23:15 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:07.231 14:23:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:07.231 14:23:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:07.231 14:23:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.231 14:23:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.231 14:23:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:07.488 14:23:15 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:07.488 14:23:15 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:07.488 14:23:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:07.745 14:23:15 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:07.745 14:23:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.745 14:23:15 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:08.003 14:23:15 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:08.003 14:23:15 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PBizHmhTpi 00:29:08.003 14:23:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PBizHmhTpi 00:29:08.260 14:23:16 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Ayisxa1siR 00:29:08.260 14:23:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Ayisxa1siR 00:29:08.518 14:23:16 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:08.518 14:23:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:08.775 nvme0n1 00:29:08.775 14:23:16 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:08.775 14:23:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:09.033 14:23:16 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:09.033 "subsystems": [ 00:29:09.033 { 00:29:09.033 "subsystem": "keyring", 00:29:09.033 "config": [ 00:29:09.033 { 00:29:09.033 "method": "keyring_file_add_key", 00:29:09.033 "params": { 00:29:09.033 "name": "key0", 00:29:09.033 "path": "/tmp/tmp.PBizHmhTpi" 00:29:09.033 } 00:29:09.033 }, 00:29:09.033 { 00:29:09.033 "method": "keyring_file_add_key", 00:29:09.033 "params": { 00:29:09.033 "name": "key1", 00:29:09.033 "path": "/tmp/tmp.Ayisxa1siR" 00:29:09.033 } 00:29:09.033 } 00:29:09.033 ] 00:29:09.033 }, 00:29:09.033 { 00:29:09.033 "subsystem": "iobuf", 00:29:09.033 "config": [ 00:29:09.033 { 00:29:09.033 "method": "iobuf_set_options", 00:29:09.033 "params": { 00:29:09.033 "small_pool_count": 8192, 00:29:09.033 "large_pool_count": 1024, 00:29:09.033 "small_bufsize": 8192, 00:29:09.033 "large_bufsize": 135168 00:29:09.033 } 00:29:09.033 } 00:29:09.033 ] 00:29:09.033 }, 00:29:09.033 { 00:29:09.033 "subsystem": "sock", 00:29:09.033 "config": [ 00:29:09.033 { 00:29:09.033 "method": "sock_set_default_impl", 00:29:09.033 "params": { 00:29:09.033 "impl_name": "posix" 00:29:09.033 } 00:29:09.033 }, 00:29:09.033 { 00:29:09.033 "method": "sock_impl_set_options", 00:29:09.033 "params": { 00:29:09.033 "impl_name": "ssl", 00:29:09.033 "recv_buf_size": 4096, 00:29:09.033 "send_buf_size": 4096, 00:29:09.033 "enable_recv_pipe": true, 00:29:09.033 "enable_quickack": false, 00:29:09.033 "enable_placement_id": 0, 00:29:09.033 "enable_zerocopy_send_server": true, 00:29:09.033 "enable_zerocopy_send_client": false, 00:29:09.033 "zerocopy_threshold": 0, 00:29:09.033 "tls_version": 0, 00:29:09.033 "enable_ktls": false 00:29:09.033 } 00:29:09.033 }, 00:29:09.033 { 00:29:09.033 "method": "sock_impl_set_options", 00:29:09.033 "params": { 00:29:09.033 "impl_name": "posix", 00:29:09.033 "recv_buf_size": 2097152, 00:29:09.033 "send_buf_size": 2097152, 00:29:09.033 "enable_recv_pipe": true, 00:29:09.033 "enable_quickack": false, 00:29:09.033 "enable_placement_id": 0, 00:29:09.033 "enable_zerocopy_send_server": true, 00:29:09.033 "enable_zerocopy_send_client": false, 00:29:09.033 "zerocopy_threshold": 0, 00:29:09.033 "tls_version": 0, 00:29:09.033 "enable_ktls": false 00:29:09.033 } 00:29:09.033 } 00:29:09.033 ] 00:29:09.033 }, 00:29:09.033 { 00:29:09.033 "subsystem": "vmd", 00:29:09.033 "config": [] 00:29:09.033 }, 00:29:09.033 { 00:29:09.033 "subsystem": "accel", 00:29:09.033 "config": [ 00:29:09.033 { 00:29:09.033 "method": "accel_set_options", 00:29:09.033 "params": { 00:29:09.033 "small_cache_size": 128, 00:29:09.033 "large_cache_size": 16, 00:29:09.034 "task_count": 2048, 00:29:09.034 "sequence_count": 2048, 00:29:09.034 "buf_count": 2048 00:29:09.034 } 00:29:09.034 } 00:29:09.034 ] 00:29:09.034 }, 00:29:09.034 { 00:29:09.034 
"subsystem": "bdev", 00:29:09.034 "config": [ 00:29:09.034 { 00:29:09.034 "method": "bdev_set_options", 00:29:09.034 "params": { 00:29:09.034 "bdev_io_pool_size": 65535, 00:29:09.034 "bdev_io_cache_size": 256, 00:29:09.034 "bdev_auto_examine": true, 00:29:09.034 "iobuf_small_cache_size": 128, 00:29:09.034 "iobuf_large_cache_size": 16 00:29:09.034 } 00:29:09.034 }, 00:29:09.034 { 00:29:09.034 "method": "bdev_raid_set_options", 00:29:09.034 "params": { 00:29:09.034 "process_window_size_kb": 1024, 00:29:09.034 "process_max_bandwidth_mb_sec": 0 00:29:09.034 } 00:29:09.034 }, 00:29:09.034 { 00:29:09.034 "method": "bdev_iscsi_set_options", 00:29:09.034 "params": { 00:29:09.034 "timeout_sec": 30 00:29:09.034 } 00:29:09.034 }, 00:29:09.034 { 00:29:09.034 "method": "bdev_nvme_set_options", 00:29:09.034 "params": { 00:29:09.034 "action_on_timeout": "none", 00:29:09.034 "timeout_us": 0, 00:29:09.034 "timeout_admin_us": 0, 00:29:09.034 "keep_alive_timeout_ms": 10000, 00:29:09.034 "arbitration_burst": 0, 00:29:09.034 "low_priority_weight": 0, 00:29:09.034 "medium_priority_weight": 0, 00:29:09.034 "high_priority_weight": 0, 00:29:09.034 "nvme_adminq_poll_period_us": 10000, 00:29:09.034 "nvme_ioq_poll_period_us": 0, 00:29:09.034 "io_queue_requests": 512, 00:29:09.034 "delay_cmd_submit": true, 00:29:09.034 "transport_retry_count": 4, 00:29:09.034 "bdev_retry_count": 3, 00:29:09.034 "transport_ack_timeout": 0, 00:29:09.034 "ctrlr_loss_timeout_sec": 0, 00:29:09.034 "reconnect_delay_sec": 0, 00:29:09.034 "fast_io_fail_timeout_sec": 0, 00:29:09.034 "disable_auto_failback": false, 00:29:09.034 "generate_uuids": false, 00:29:09.034 "transport_tos": 0, 00:29:09.034 "nvme_error_stat": false, 00:29:09.034 "rdma_srq_size": 0, 00:29:09.034 "io_path_stat": false, 00:29:09.034 "allow_accel_sequence": false, 00:29:09.034 "rdma_max_cq_size": 0, 00:29:09.034 "rdma_cm_event_timeout_ms": 0, 00:29:09.034 "dhchap_digests": [ 00:29:09.034 "sha256", 00:29:09.034 "sha384", 00:29:09.034 "sha512" 00:29:09.034 ], 00:29:09.034 "dhchap_dhgroups": [ 00:29:09.034 "null", 00:29:09.034 "ffdhe2048", 00:29:09.034 "ffdhe3072", 00:29:09.034 "ffdhe4096", 00:29:09.034 "ffdhe6144", 00:29:09.034 "ffdhe8192" 00:29:09.034 ] 00:29:09.034 } 00:29:09.034 }, 00:29:09.034 { 00:29:09.034 "method": "bdev_nvme_attach_controller", 00:29:09.034 "params": { 00:29:09.034 "name": "nvme0", 00:29:09.034 "trtype": "TCP", 00:29:09.034 "adrfam": "IPv4", 00:29:09.034 "traddr": "127.0.0.1", 00:29:09.034 "trsvcid": "4420", 00:29:09.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:09.034 "prchk_reftag": false, 00:29:09.034 "prchk_guard": false, 00:29:09.034 "ctrlr_loss_timeout_sec": 0, 00:29:09.034 "reconnect_delay_sec": 0, 00:29:09.034 "fast_io_fail_timeout_sec": 0, 00:29:09.034 "psk": "key0", 00:29:09.034 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:09.034 "hdgst": false, 00:29:09.034 "ddgst": false 00:29:09.034 } 00:29:09.034 }, 00:29:09.034 { 00:29:09.034 "method": "bdev_nvme_set_hotplug", 00:29:09.034 "params": { 00:29:09.034 "period_us": 100000, 00:29:09.034 "enable": false 00:29:09.034 } 00:29:09.034 }, 00:29:09.034 { 00:29:09.034 "method": "bdev_wait_for_examine" 00:29:09.034 } 00:29:09.034 ] 00:29:09.034 }, 00:29:09.034 { 00:29:09.034 "subsystem": "nbd", 00:29:09.034 "config": [] 00:29:09.034 } 00:29:09.034 ] 00:29:09.034 }' 00:29:09.034 14:23:16 keyring_file -- keyring/file.sh@114 -- # killprocess 355646 00:29:09.034 14:23:16 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 355646 ']' 00:29:09.034 14:23:16 keyring_file -- 
common/autotest_common.sh@954 -- # kill -0 355646 00:29:09.034 14:23:16 keyring_file -- common/autotest_common.sh@955 -- # uname 00:29:09.034 14:23:16 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:09.034 14:23:16 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 355646 00:29:09.034 14:23:17 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:09.034 14:23:17 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:09.034 14:23:17 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 355646' 00:29:09.034 killing process with pid 355646 00:29:09.034 14:23:17 keyring_file -- common/autotest_common.sh@969 -- # kill 355646 00:29:09.034 Received shutdown signal, test time was about 1.000000 seconds 00:29:09.034 00:29:09.034 Latency(us) 00:29:09.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.034 =================================================================================================================== 00:29:09.034 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:09.034 14:23:17 keyring_file -- common/autotest_common.sh@974 -- # wait 355646 00:29:09.292 14:23:17 keyring_file -- keyring/file.sh@117 -- # bperfpid=357116 00:29:09.293 14:23:17 keyring_file -- keyring/file.sh@119 -- # waitforlisten 357116 /var/tmp/bperf.sock 00:29:09.293 14:23:17 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 357116 ']' 00:29:09.293 14:23:17 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:09.293 14:23:17 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:09.293 14:23:17 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:09.293 "subsystems": [ 00:29:09.293 { 00:29:09.293 "subsystem": "keyring", 00:29:09.293 "config": [ 00:29:09.293 { 00:29:09.293 "method": "keyring_file_add_key", 00:29:09.293 "params": { 00:29:09.293 "name": "key0", 00:29:09.293 "path": "/tmp/tmp.PBizHmhTpi" 00:29:09.293 } 00:29:09.293 }, 00:29:09.293 { 00:29:09.293 "method": "keyring_file_add_key", 00:29:09.293 "params": { 00:29:09.293 "name": "key1", 00:29:09.293 "path": "/tmp/tmp.Ayisxa1siR" 00:29:09.293 } 00:29:09.293 } 00:29:09.293 ] 00:29:09.293 }, 00:29:09.293 { 00:29:09.293 "subsystem": "iobuf", 00:29:09.293 "config": [ 00:29:09.293 { 00:29:09.293 "method": "iobuf_set_options", 00:29:09.293 "params": { 00:29:09.293 "small_pool_count": 8192, 00:29:09.293 "large_pool_count": 1024, 00:29:09.293 "small_bufsize": 8192, 00:29:09.293 "large_bufsize": 135168 00:29:09.293 } 00:29:09.293 } 00:29:09.293 ] 00:29:09.293 }, 00:29:09.293 { 00:29:09.293 "subsystem": "sock", 00:29:09.293 "config": [ 00:29:09.293 { 00:29:09.293 "method": "sock_set_default_impl", 00:29:09.293 "params": { 00:29:09.293 "impl_name": "posix" 00:29:09.293 } 00:29:09.293 }, 00:29:09.293 { 00:29:09.293 "method": "sock_impl_set_options", 00:29:09.293 "params": { 00:29:09.293 "impl_name": "ssl", 00:29:09.293 "recv_buf_size": 4096, 00:29:09.293 "send_buf_size": 4096, 00:29:09.293 "enable_recv_pipe": true, 00:29:09.293 "enable_quickack": false, 00:29:09.293 "enable_placement_id": 0, 00:29:09.293 "enable_zerocopy_send_server": true, 00:29:09.293 "enable_zerocopy_send_client": false, 00:29:09.293 "zerocopy_threshold": 0, 00:29:09.293 "tls_version": 0, 00:29:09.293 "enable_ktls": false 00:29:09.293 } 
00:29:09.293 }, 00:29:09.293 { 00:29:09.293 "method": "sock_impl_set_options", 00:29:09.293 "params": { 00:29:09.293 "impl_name": "posix", 00:29:09.293 "recv_buf_size": 2097152, 00:29:09.293 "send_buf_size": 2097152, 00:29:09.293 "enable_recv_pipe": true, 00:29:09.293 "enable_quickack": false, 00:29:09.293 "enable_placement_id": 0, 00:29:09.293 "enable_zerocopy_send_server": true, 00:29:09.293 "enable_zerocopy_send_client": false, 00:29:09.293 "zerocopy_threshold": 0, 00:29:09.293 "tls_version": 0, 00:29:09.293 "enable_ktls": false 00:29:09.293 } 00:29:09.293 } 00:29:09.293 ] 00:29:09.293 }, 00:29:09.293 { 00:29:09.293 "subsystem": "vmd", 00:29:09.293 "config": [] 00:29:09.293 }, 00:29:09.293 { 00:29:09.293 "subsystem": "accel", 00:29:09.293 "config": [ 00:29:09.293 { 00:29:09.293 "method": "accel_set_options", 00:29:09.293 "params": { 00:29:09.293 "small_cache_size": 128, 00:29:09.293 "large_cache_size": 16, 00:29:09.293 "task_count": 2048, 00:29:09.293 "sequence_count": 2048, 00:29:09.293 "buf_count": 2048 00:29:09.293 } 00:29:09.293 } 00:29:09.293 ] 00:29:09.293 }, 00:29:09.293 { 00:29:09.293 "subsystem": "bdev", 00:29:09.293 "config": [ 00:29:09.293 { 00:29:09.293 "method": "bdev_set_options", 00:29:09.293 "params": { 00:29:09.293 "bdev_io_pool_size": 65535, 00:29:09.293 "bdev_io_cache_size": 256, 00:29:09.293 "bdev_auto_examine": true, 00:29:09.293 "iobuf_small_cache_size": 128, 00:29:09.293 "iobuf_large_cache_size": 16 00:29:09.293 } 00:29:09.293 }, 00:29:09.293 { 00:29:09.293 "method": "bdev_raid_set_options", 00:29:09.293 "params": { 00:29:09.293 "process_window_size_kb": 1024, 00:29:09.293 "process_max_bandwidth_mb_sec": 0 00:29:09.293 } 00:29:09.293 }, 00:29:09.293 { 00:29:09.293 "method": "bdev_iscsi_set_options", 00:29:09.293 "params": { 00:29:09.293 "timeout_sec": 30 00:29:09.293 } 00:29:09.293 }, 00:29:09.293 { 00:29:09.293 "method": "bdev_nvme_set_options", 00:29:09.293 "params": { 00:29:09.293 "action_on_timeout": "none", 00:29:09.293 "timeout_us": 0, 00:29:09.293 "timeout_admin_us": 0, 00:29:09.293 "keep_alive_timeout_ms": 10000, 00:29:09.293 "arbitration_burst": 0, 00:29:09.293 "low_priority_weight": 0, 00:29:09.293 "medium_priority_weight": 0, 00:29:09.293 "high_priority_weight": 0, 00:29:09.293 "nvme_adminq_poll_period_us": 10000, 00:29:09.293 "nvme_ioq_poll_period_us": 0, 00:29:09.293 "io_queue_requests": 512, 00:29:09.293 "delay_cmd_submit": true, 00:29:09.293 "transport_retry_count": 4, 00:29:09.293 "bdev_retry_count": 3, 00:29:09.293 "transport_ack_timeout": 0, 00:29:09.293 "ctrlr_loss_timeout_sec": 0, 00:29:09.293 "reconnect_delay_sec": 0, 00:29:09.293 "fast_io_fail_timeout_sec": 0, 00:29:09.293 "disable_auto_failback": false, 00:29:09.293 "generate_uuids": false, 00:29:09.293 "transport_tos": 0, 00:29:09.293 "nvme_error_stat": false, 00:29:09.293 "rdma_srq_size": 0, 00:29:09.293 "io_path_stat": false, 00:29:09.293 "allow_accel_sequence": false, 00:29:09.293 "rdma_max_cq_size": 0, 00:29:09.293 "rdma_cm_event_timeout_ms": 0, 00:29:09.293 "dhchap_digests": [ 00:29:09.293 "sha256", 00:29:09.293 "sha384", 00:29:09.293 "sha512" 00:29:09.293 ], 00:29:09.293 "dhchap_dhgroups": [ 00:29:09.293 "null", 00:29:09.293 "ffdhe2048", 00:29:09.293 "ffdhe3072", 00:29:09.293 "ffdhe4096", 00:29:09.293 "ffdhe6144", 00:29:09.293 "ffdhe8192" 00:29:09.293 ] 00:29:09.293 } 00:29:09.293 }, 00:29:09.293 { 00:29:09.293 "method": "bdev_nvme_attach_controller", 00:29:09.293 "params": { 00:29:09.293 "name": "nvme0", 00:29:09.293 "trtype": "TCP", 00:29:09.293 "adrfam": "IPv4", 00:29:09.293 
"traddr": "127.0.0.1", 00:29:09.293 "trsvcid": "4420", 00:29:09.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:09.293 "prchk_reftag": false, 00:29:09.293 "prchk_guard": false, 00:29:09.293 "ctrlr_loss_timeout_sec": 0, 00:29:09.293 "reconnect_delay_sec": 0, 00:29:09.293 "fast_io_fail_timeout_sec": 0, 00:29:09.293 "psk": "key0", 00:29:09.293 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:09.293 "hdgst": false, 00:29:09.293 "ddgst": false 00:29:09.293 } 00:29:09.293 }, 00:29:09.293 { 00:29:09.293 "method": "bdev_nvme_set_hotplug", 00:29:09.294 "params": { 00:29:09.294 "period_us": 100000, 00:29:09.294 "enable": false 00:29:09.294 } 00:29:09.294 }, 00:29:09.294 { 00:29:09.294 "method": "bdev_wait_for_examine" 00:29:09.294 } 00:29:09.294 ] 00:29:09.294 }, 00:29:09.294 { 00:29:09.294 "subsystem": "nbd", 00:29:09.294 "config": [] 00:29:09.294 } 00:29:09.294 ] 00:29:09.294 }' 00:29:09.294 14:23:17 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:09.294 14:23:17 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:09.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:09.294 14:23:17 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:09.294 14:23:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:09.552 [2024-07-26 14:23:17.312682] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:29:09.552 [2024-07-26 14:23:17.312762] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357116 ] 00:29:09.552 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.552 [2024-07-26 14:23:17.374270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.552 [2024-07-26 14:23:17.491689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.810 [2024-07-26 14:23:17.680569] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:10.374 14:23:18 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:10.374 14:23:18 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:29:10.374 14:23:18 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:10.374 14:23:18 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:10.374 14:23:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.631 14:23:18 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:10.631 14:23:18 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:10.631 14:23:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:10.631 14:23:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.631 14:23:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.631 14:23:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.631 14:23:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:10.888 14:23:18 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:10.888 14:23:18 keyring_file -- keyring/file.sh@122 -- # 
get_refcnt key1 00:29:10.888 14:23:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:10.888 14:23:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.888 14:23:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.888 14:23:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.888 14:23:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:11.146 14:23:19 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:11.146 14:23:19 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:11.146 14:23:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:11.146 14:23:19 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:11.404 14:23:19 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:11.404 14:23:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:11.404 14:23:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.PBizHmhTpi /tmp/tmp.Ayisxa1siR 00:29:11.404 14:23:19 keyring_file -- keyring/file.sh@20 -- # killprocess 357116 00:29:11.404 14:23:19 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 357116 ']' 00:29:11.404 14:23:19 keyring_file -- common/autotest_common.sh@954 -- # kill -0 357116 00:29:11.404 14:23:19 keyring_file -- common/autotest_common.sh@955 -- # uname 00:29:11.404 14:23:19 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:11.404 14:23:19 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 357116 00:29:11.404 14:23:19 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:11.404 14:23:19 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:11.404 14:23:19 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 357116' 00:29:11.404 killing process with pid 357116 00:29:11.404 14:23:19 keyring_file -- common/autotest_common.sh@969 -- # kill 357116 00:29:11.404 Received shutdown signal, test time was about 1.000000 seconds 00:29:11.404 00:29:11.404 Latency(us) 00:29:11.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.404 =================================================================================================================== 00:29:11.404 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:11.404 14:23:19 keyring_file -- common/autotest_common.sh@974 -- # wait 357116 00:29:11.662 14:23:19 keyring_file -- keyring/file.sh@21 -- # killprocess 355642 00:29:11.662 14:23:19 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 355642 ']' 00:29:11.662 14:23:19 keyring_file -- common/autotest_common.sh@954 -- # kill -0 355642 00:29:11.662 14:23:19 keyring_file -- common/autotest_common.sh@955 -- # uname 00:29:11.662 14:23:19 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:11.662 14:23:19 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 355642 00:29:11.662 14:23:19 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:11.662 14:23:19 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:11.662 14:23:19 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 355642' 00:29:11.662 killing 
process with pid 355642 00:29:11.662 14:23:19 keyring_file -- common/autotest_common.sh@969 -- # kill 355642 00:29:11.662 [2024-07-26 14:23:19.579223] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:11.662 14:23:19 keyring_file -- common/autotest_common.sh@974 -- # wait 355642 00:29:12.227 00:29:12.227 real 0m14.075s 00:29:12.227 user 0m35.382s 00:29:12.227 sys 0m3.135s 00:29:12.227 14:23:20 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:12.227 14:23:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:12.227 ************************************ 00:29:12.227 END TEST keyring_file 00:29:12.227 ************************************ 00:29:12.227 14:23:20 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:29:12.227 14:23:20 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:12.227 14:23:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:12.227 14:23:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:12.227 14:23:20 -- common/autotest_common.sh@10 -- # set +x 00:29:12.227 ************************************ 00:29:12.227 START TEST keyring_linux 00:29:12.227 ************************************ 00:29:12.227 14:23:20 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:12.227 * Looking for test storage... 00:29:12.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:12.227 14:23:20 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.227 14:23:20 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.227 14:23:20 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.227 14:23:20 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.227 14:23:20 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.227 14:23:20 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.227 14:23:20 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.227 14:23:20 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:12.227 14:23:20 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:12.227 14:23:20 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:12.227 14:23:20 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:12.227 14:23:20 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:12.227 14:23:20 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:12.227 14:23:20 
keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:12.227 14:23:20 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:12.227 /tmp/:spdk-test:key0 00:29:12.227 14:23:20 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:12.227 14:23:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:12.227 14:23:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:12.227 /tmp/:spdk-test:key1 00:29:12.227 14:23:20 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=357479 00:29:12.227 14:23:20 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:12.227 14:23:20 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 357479 00:29:12.227 14:23:20 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 357479 ']' 00:29:12.227 14:23:20 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.227 14:23:20 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 
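The prep_key traces above wrap the two raw hex strings through format_interchange_psk (via an inline `python -`) before writing them to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1; the chmod 0600 keeps each PSK file private to the test user before its path is echoed back to the caller. A minimal sketch of what that wrapping appears to compute, assuming the TP-8006-style configured-PSK interchange format (digest 0, i.e. no HMAC): the payload is the ASCII key string plus a little-endian CRC32 trailer, base64-encoded between the "NVMeTLSkey-1:00:" prefix and a closing colon.

```bash
# Sketch only: reproduces the observed NVMeTLSkey-1:00:...: shape for key0.
# Assumption: digest 0 means no hash, and the CRC32 trailer is little-endian,
# matching the MDAx... payload seen in the keyctl add calls further down
# (base64 "MDAx" is the ASCII text "001", the start of key0's hex string).
python3 - <<'EOF'
import base64, struct, zlib

key = b"00112233445566778899aabbccddeeff"              # key0 from linux.sh, as ASCII
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)  # 4-byte LE CRC32 trailer
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF
```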
00:29:12.227 14:23:20 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.227 14:23:20 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:12.227 14:23:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:12.485 [2024-07-26 14:23:20.260040] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:29:12.485 [2024-07-26 14:23:20.260136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357479 ] 00:29:12.485 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.485 [2024-07-26 14:23:20.316875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.485 [2024-07-26 14:23:20.423944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.743 14:23:20 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:12.743 14:23:20 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:29:12.743 14:23:20 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:12.743 14:23:20 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.743 14:23:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:12.743 [2024-07-26 14:23:20.664162] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.743 null0 00:29:12.743 [2024-07-26 14:23:20.696226] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:12.743 [2024-07-26 14:23:20.696667] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:12.743 14:23:20 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.743 14:23:20 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:12.743 677995908 00:29:12.743 14:23:20 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:12.743 741578610 00:29:12.743 14:23:20 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=357608 00:29:12.743 14:23:20 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:12.743 14:23:20 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 357608 /var/tmp/bperf.sock 00:29:12.743 14:23:20 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 357608 ']' 00:29:12.743 14:23:20 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:12.743 14:23:20 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:12.743 14:23:20 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:12.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
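The keyctl add calls above push both interchange keys into the kernel session keyring (@s) as user-type keys and print the serial numbers (677995908 and 741578610) that the test later re-resolves by name. A hedged sketch of that keyutils round trip, with a placeholder key name and payload rather than the test's real key material:

```bash
# Illustrative only: same lifecycle as the test, throwaway name and payload.
sn=$(keyctl add user :spdk-test:demo 'NVMeTLSkey-1:00:ZGVtbw==:' @s)  # prints the new serial
keyctl search @s user :spdk-test:demo   # re-resolves the same serial by name
keyctl print "$sn"                      # dumps the payload for a [[ ... ]] comparison
keyctl unlink "$sn"                     # what cleanup's unlink_key does at the end
```

The serials are what check_keys and get_keysn compare against keyring_get_keys output from the bperf RPC socket further down.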
00:29:12.743 14:23:20 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:12.743 14:23:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:13.001 [2024-07-26 14:23:20.763422] Starting SPDK v24.09-pre git sha1 477912bde / DPDK 24.03.0 initialization... 00:29:13.001 [2024-07-26 14:23:20.763523] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357608 ] 00:29:13.001 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.001 [2024-07-26 14:23:20.818362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.001 [2024-07-26 14:23:20.923074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.001 14:23:20 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:13.001 14:23:20 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:29:13.001 14:23:20 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:13.001 14:23:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:13.259 14:23:21 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:13.259 14:23:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:13.824 14:23:21 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:13.824 14:23:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:13.824 [2024-07-26 14:23:21.787676] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:14.082 nvme0n1 00:29:14.082 14:23:21 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:14.082 14:23:21 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:14.082 14:23:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:14.082 14:23:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:14.082 14:23:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:14.082 14:23:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.339 14:23:22 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:14.339 14:23:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:14.339 14:23:22 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:14.339 14:23:22 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:14.339 14:23:22 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.339 14:23:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.339 14:23:22 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == ":spdk-test:key0")' 00:29:14.596 14:23:22 keyring_linux -- keyring/linux.sh@25 -- # sn=677995908 00:29:14.596 14:23:22 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:14.596 14:23:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:14.596 14:23:22 keyring_linux -- keyring/linux.sh@26 -- # [[ 677995908 == \6\7\7\9\9\5\9\0\8 ]] 00:29:14.596 14:23:22 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 677995908 00:29:14.596 14:23:22 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:14.596 14:23:22 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:14.596 Running I/O for 1 seconds... 00:29:15.525 00:29:15.525 Latency(us) 00:29:15.525 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.525 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:15.525 nvme0n1 : 1.01 11114.64 43.42 0.00 0.00 11438.37 4344.79 15922.82 00:29:15.525 =================================================================================================================== 00:29:15.525 Total : 11114.64 43.42 0.00 0.00 11438.37 4344.79 15922.82 00:29:15.525 0 00:29:15.525 14:23:23 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:15.525 14:23:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:15.783 14:23:23 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:15.783 14:23:23 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:15.783 14:23:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:15.783 14:23:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:15.783 14:23:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:15.783 14:23:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:16.040 14:23:23 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:16.040 14:23:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:16.040 14:23:23 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:16.040 14:23:23 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:16.040 14:23:23 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:29:16.040 14:23:23 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:16.040 14:23:23 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:16.040 14:23:23 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:16.040 14:23:23 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:16.040 14:23:23 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
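The bdev_nvme_attach_controller call that follows, using :spdk-test:key1, is wrapped in autotest_common's NOT helper, so the test passes only if the RPC fails. A simplified sketch of that inversion; the real helper in autotest_common.sh also type-checks its argument (the `type -t` cases traced above) before dispatching:

```bash
# Simplified sketch of the NOT-style negative assertion driving the key1 attach.
NOT() {
    if "$@"; then
        return 1    # wrapped command unexpectedly succeeded
    fi
    return 0        # failure is exactly what the caller asserted
}

NOT false && echo 'negative assertion held'   # usage: wrap a command expected to fail
```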
00:29:16.040 14:23:23 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:16.040 14:23:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:16.297 [2024-07-26 14:23:24.240265] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:16.297 [2024-07-26 14:23:24.240566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11eef10 (107): Transport endpoint is not connected 00:29:16.297 [2024-07-26 14:23:24.241544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11eef10 (9): Bad file descriptor 00:29:16.297 [2024-07-26 14:23:24.242544] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:16.297 [2024-07-26 14:23:24.242571] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:16.297 [2024-07-26 14:23:24.242602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:16.297 request: 00:29:16.297 { 00:29:16.297 "name": "nvme0", 00:29:16.297 "trtype": "tcp", 00:29:16.297 "traddr": "127.0.0.1", 00:29:16.297 "adrfam": "ipv4", 00:29:16.297 "trsvcid": "4420", 00:29:16.297 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:16.297 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:16.297 "prchk_reftag": false, 00:29:16.297 "prchk_guard": false, 00:29:16.297 "hdgst": false, 00:29:16.297 "ddgst": false, 00:29:16.297 "psk": ":spdk-test:key1", 00:29:16.297 "method": "bdev_nvme_attach_controller", 00:29:16.297 "req_id": 1 00:29:16.297 } 00:29:16.297 Got JSON-RPC error response 00:29:16.297 response: 00:29:16.297 { 00:29:16.297 "code": -5, 00:29:16.297 "message": "Input/output error" 00:29:16.297 } 00:29:16.297 14:23:24 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:29:16.297 14:23:24 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:16.298 14:23:24 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:16.298 14:23:24 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@33 -- # sn=677995908 00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 677995908 00:29:16.298 1 links removed 00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 
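The JSON-RPC error above (code -5, Input/output error, after the transport endpoint drops) is what the NOT wrapper converts into a pass: the target only accepts key0's PSK, so the key1 handshake cannot complete. Driving the same call by hand looks roughly like this; every flag is taken from the trace above, and only the failure branch is illustrative:

```bash
# Sketch: the same attach issued manually against the bperf RPC socket.
if ! /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key1; then
    echo 'attach with the wrong PSK failed, as the test expects' >&2
fi
```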
00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@33 -- # sn=741578610 00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 741578610 00:29:16.298 1 links removed 00:29:16.298 14:23:24 keyring_linux -- keyring/linux.sh@41 -- # killprocess 357608 00:29:16.298 14:23:24 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 357608 ']' 00:29:16.298 14:23:24 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 357608 00:29:16.298 14:23:24 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:29:16.298 14:23:24 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:16.298 14:23:24 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 357608 00:29:16.298 14:23:24 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:16.298 14:23:24 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:16.298 14:23:24 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 357608' 00:29:16.298 killing process with pid 357608 00:29:16.298 14:23:24 keyring_linux -- common/autotest_common.sh@969 -- # kill 357608 00:29:16.298 Received shutdown signal, test time was about 1.000000 seconds 00:29:16.298 00:29:16.298 Latency(us) 00:29:16.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.298 =================================================================================================================== 00:29:16.298 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:16.298 14:23:24 keyring_linux -- common/autotest_common.sh@974 -- # wait 357608 00:29:16.556 14:23:24 keyring_linux -- keyring/linux.sh@42 -- # killprocess 357479 00:29:16.556 14:23:24 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 357479 ']' 00:29:16.556 14:23:24 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 357479 00:29:16.556 14:23:24 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:29:16.556 14:23:24 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:16.556 14:23:24 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 357479 00:29:16.814 14:23:24 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:16.814 14:23:24 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:16.814 14:23:24 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 357479' 00:29:16.814 killing process with pid 357479 00:29:16.814 14:23:24 keyring_linux -- common/autotest_common.sh@969 -- # kill 357479 00:29:16.814 14:23:24 keyring_linux -- common/autotest_common.sh@974 -- # wait 357479 00:29:17.072 00:29:17.072 real 0m4.945s 00:29:17.072 user 0m9.689s 00:29:17.072 sys 0m1.544s 00:29:17.072 14:23:25 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:17.072 14:23:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:17.072 ************************************ 00:29:17.072 END TEST keyring_linux 00:29:17.072 ************************************ 00:29:17.072 14:23:25 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:17.072 14:23:25 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:29:17.072 14:23:25 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:17.072 14:23:25 -- 
spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:29:17.072 14:23:25 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:29:17.072 14:23:25 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:17.072 14:23:25 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:17.072 14:23:25 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:29:17.072 14:23:25 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:29:17.072 14:23:25 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:29:17.072 14:23:25 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:29:17.072 14:23:25 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:17.072 14:23:25 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:17.072 14:23:25 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:17.072 14:23:25 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:29:17.072 14:23:25 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:29:17.072 14:23:25 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:29:17.072 14:23:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:17.072 14:23:25 -- common/autotest_common.sh@10 -- # set +x 00:29:17.072 14:23:25 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:29:17.072 14:23:25 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:17.072 14:23:25 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:17.072 14:23:25 -- common/autotest_common.sh@10 -- # set +x 00:29:18.973 INFO: APP EXITING 00:29:18.973 INFO: killing all VMs 00:29:18.973 INFO: killing vhost app 00:29:18.973 INFO: EXIT DONE 00:29:20.346 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:29:20.346 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:20.346 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:20.346 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:20.346 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:20.346 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:29:20.346 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:20.346 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:20.346 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:29:20.346 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:20.346 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:20.346 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:20.346 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:20.346 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:20.346 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:20.346 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:20.346 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:21.723 Cleaning 00:29:21.723 Removing: /var/run/dpdk/spdk0/config 00:29:21.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:21.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:21.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:21.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:21.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:21.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:21.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:21.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:21.723 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:21.723 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:21.723 Removing: /var/run/dpdk/spdk1/config 00:29:21.723 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:21.723 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:21.723 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:21.723 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:21.723 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:21.723 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:21.723 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:21.723 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:21.723 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:21.723 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:21.723 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:21.723 Removing: /var/run/dpdk/spdk2/config 00:29:21.723 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:21.723 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:21.723 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:21.723 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:21.723 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:21.723 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:21.723 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:21.723 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:21.723 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:21.723 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:21.723 Removing: /var/run/dpdk/spdk3/config 00:29:21.723 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:21.723 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:21.723 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:21.723 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:21.723 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:21.723 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:21.723 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:21.723 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:21.723 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:21.723 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:21.723 Removing: /var/run/dpdk/spdk4/config 00:29:21.723 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:21.723 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:21.723 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:21.723 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:21.723 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:21.723 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:21.723 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:21.723 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:21.723 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:21.723 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:21.723 Removing: /dev/shm/bdev_svc_trace.1 00:29:21.723 Removing: /dev/shm/nvmf_trace.0 00:29:21.723 Removing: /dev/shm/spdk_tgt_trace.pid104726 00:29:21.723 Removing: /var/run/dpdk/spdk0 00:29:21.723 Removing: /var/run/dpdk/spdk1 00:29:21.723 Removing: /var/run/dpdk/spdk2 00:29:21.723 Removing: /var/run/dpdk/spdk3 00:29:21.723 Removing: /var/run/dpdk/spdk4 00:29:21.723 Removing: /var/run/dpdk/spdk_pid102443 00:29:21.723 Removing: /var/run/dpdk/spdk_pid103292 00:29:21.723 Removing: /var/run/dpdk/spdk_pid104726 00:29:21.723 Removing: /var/run/dpdk/spdk_pid105063 00:29:21.723 Removing: /var/run/dpdk/spdk_pid105735 00:29:21.723 Removing: /var/run/dpdk/spdk_pid105875 00:29:21.723 Removing: /var/run/dpdk/spdk_pid106593 
00:29:21.723 Removing: /var/run/dpdk/spdk_pid106717 00:29:21.723 Removing: /var/run/dpdk/spdk_pid106959 00:29:21.723 Removing: /var/run/dpdk/spdk_pid108160 00:29:21.981 Removing: /var/run/dpdk/spdk_pid109200 00:29:21.981 Removing: /var/run/dpdk/spdk_pid109383 00:29:21.981 Removing: /var/run/dpdk/spdk_pid109574 00:29:21.981 Removing: /var/run/dpdk/spdk_pid109774 00:29:21.981 Removing: /var/run/dpdk/spdk_pid109966 00:29:21.981 Removing: /var/run/dpdk/spdk_pid110245 00:29:21.981 Removing: /var/run/dpdk/spdk_pid110405 00:29:21.981 Removing: /var/run/dpdk/spdk_pid110583 00:29:21.981 Removing: /var/run/dpdk/spdk_pid110908 00:29:21.981 Removing: /var/run/dpdk/spdk_pid113275 00:29:21.981 Removing: /var/run/dpdk/spdk_pid113439 00:29:21.981 Removing: /var/run/dpdk/spdk_pid113603 00:29:21.981 Removing: /var/run/dpdk/spdk_pid113613 00:29:21.981 Removing: /var/run/dpdk/spdk_pid114033 00:29:21.981 Removing: /var/run/dpdk/spdk_pid114047 00:29:21.982 Removing: /var/run/dpdk/spdk_pid114473 00:29:21.982 Removing: /var/run/dpdk/spdk_pid114481 00:29:21.982 Removing: /var/run/dpdk/spdk_pid114768 00:29:21.982 Removing: /var/run/dpdk/spdk_pid114780 00:29:21.982 Removing: /var/run/dpdk/spdk_pid114943 00:29:21.982 Removing: /var/run/dpdk/spdk_pid115042 00:29:21.982 Removing: /var/run/dpdk/spdk_pid115443 00:29:21.982 Removing: /var/run/dpdk/spdk_pid115601 00:29:21.982 Removing: /var/run/dpdk/spdk_pid115793 00:29:21.982 Removing: /var/run/dpdk/spdk_pid117930 00:29:21.982 Removing: /var/run/dpdk/spdk_pid120484 00:29:21.982 Removing: /var/run/dpdk/spdk_pid127333 00:29:21.982 Removing: /var/run/dpdk/spdk_pid127749 00:29:21.982 Removing: /var/run/dpdk/spdk_pid130251 00:29:21.982 Removing: /var/run/dpdk/spdk_pid130529 00:29:21.982 Removing: /var/run/dpdk/spdk_pid133037 00:29:21.982 Removing: /var/run/dpdk/spdk_pid137367 00:29:21.982 Removing: /var/run/dpdk/spdk_pid139440 00:29:21.982 Removing: /var/run/dpdk/spdk_pid145838 00:29:21.982 Removing: /var/run/dpdk/spdk_pid151043 00:29:21.982 Removing: /var/run/dpdk/spdk_pid152250 00:29:21.982 Removing: /var/run/dpdk/spdk_pid152912 00:29:21.982 Removing: /var/run/dpdk/spdk_pid163137 00:29:21.982 Removing: /var/run/dpdk/spdk_pid165402 00:29:21.982 Removing: /var/run/dpdk/spdk_pid191320 00:29:21.982 Removing: /var/run/dpdk/spdk_pid194593 00:29:21.982 Removing: /var/run/dpdk/spdk_pid198425 00:29:21.982 Removing: /var/run/dpdk/spdk_pid202256 00:29:21.982 Removing: /var/run/dpdk/spdk_pid202258 00:29:21.982 Removing: /var/run/dpdk/spdk_pid202912 00:29:21.982 Removing: /var/run/dpdk/spdk_pid203572 00:29:21.982 Removing: /var/run/dpdk/spdk_pid204129 00:29:21.982 Removing: /var/run/dpdk/spdk_pid204524 00:29:21.982 Removing: /var/run/dpdk/spdk_pid204645 00:29:21.982 Removing: /var/run/dpdk/spdk_pid204784 00:29:21.982 Removing: /var/run/dpdk/spdk_pid204923 00:29:21.982 Removing: /var/run/dpdk/spdk_pid204925 00:29:21.982 Removing: /var/run/dpdk/spdk_pid205588 00:29:21.982 Removing: /var/run/dpdk/spdk_pid206243 00:29:21.982 Removing: /var/run/dpdk/spdk_pid206779 00:29:21.982 Removing: /var/run/dpdk/spdk_pid207178 00:29:21.982 Removing: /var/run/dpdk/spdk_pid207295 00:29:21.982 Removing: /var/run/dpdk/spdk_pid207443 00:29:21.982 Removing: /var/run/dpdk/spdk_pid208477 00:29:21.982 Removing: /var/run/dpdk/spdk_pid209732 00:29:21.982 Removing: /var/run/dpdk/spdk_pid215055 00:29:21.982 Removing: /var/run/dpdk/spdk_pid240007 00:29:21.982 Removing: /var/run/dpdk/spdk_pid242788 00:29:21.982 Removing: /var/run/dpdk/spdk_pid243965 00:29:21.982 Removing: /var/run/dpdk/spdk_pid245167 00:29:21.982 
Removing: /var/run/dpdk/spdk_pid245303 00:29:21.982 Removing: /var/run/dpdk/spdk_pid245439 00:29:21.982 Removing: /var/run/dpdk/spdk_pid245478 00:29:21.982 Removing: /var/run/dpdk/spdk_pid245907 00:29:21.982 Removing: /var/run/dpdk/spdk_pid247220 00:29:21.982 Removing: /var/run/dpdk/spdk_pid247963 00:29:21.982 Removing: /var/run/dpdk/spdk_pid248286 00:29:21.982 Removing: /var/run/dpdk/spdk_pid249889 00:29:21.982 Removing: /var/run/dpdk/spdk_pid250437 00:29:21.982 Removing: /var/run/dpdk/spdk_pid250878 00:29:21.982 Removing: /var/run/dpdk/spdk_pid253394 00:29:21.982 Removing: /var/run/dpdk/spdk_pid259558 00:29:21.982 Removing: /var/run/dpdk/spdk_pid262322 00:29:21.982 Removing: /var/run/dpdk/spdk_pid266606 00:29:21.982 Removing: /var/run/dpdk/spdk_pid267552 00:29:21.982 Removing: /var/run/dpdk/spdk_pid268659 00:29:21.982 Removing: /var/run/dpdk/spdk_pid271341 00:29:21.982 Removing: /var/run/dpdk/spdk_pid273574 00:29:21.982 Removing: /var/run/dpdk/spdk_pid277776 00:29:21.982 Removing: /var/run/dpdk/spdk_pid277784 00:29:21.982 Removing: /var/run/dpdk/spdk_pid280552 00:29:21.982 Removing: /var/run/dpdk/spdk_pid280733 00:29:21.982 Removing: /var/run/dpdk/spdk_pid280944 00:29:21.982 Removing: /var/run/dpdk/spdk_pid281210 00:29:21.982 Removing: /var/run/dpdk/spdk_pid281221 00:29:21.982 Removing: /var/run/dpdk/spdk_pid283990 00:29:21.982 Removing: /var/run/dpdk/spdk_pid284319 00:29:21.982 Removing: /var/run/dpdk/spdk_pid286978 00:29:21.982 Removing: /var/run/dpdk/spdk_pid288835 00:29:21.982 Removing: /var/run/dpdk/spdk_pid292254 00:29:21.982 Removing: /var/run/dpdk/spdk_pid295571 00:29:21.982 Removing: /var/run/dpdk/spdk_pid302448 00:29:21.982 Removing: /var/run/dpdk/spdk_pid306914 00:29:21.982 Removing: /var/run/dpdk/spdk_pid306916 00:29:21.982 Removing: /var/run/dpdk/spdk_pid319149 00:29:21.982 Removing: /var/run/dpdk/spdk_pid319553 00:29:21.982 Removing: /var/run/dpdk/spdk_pid320028 00:29:21.982 Removing: /var/run/dpdk/spdk_pid320478 00:29:21.982 Removing: /var/run/dpdk/spdk_pid321040 00:29:21.982 Removing: /var/run/dpdk/spdk_pid321469 00:29:21.982 Removing: /var/run/dpdk/spdk_pid321879 00:29:21.982 Removing: /var/run/dpdk/spdk_pid322292 00:29:21.982 Removing: /var/run/dpdk/spdk_pid324789 00:29:21.982 Removing: /var/run/dpdk/spdk_pid324939 00:29:21.982 Removing: /var/run/dpdk/spdk_pid328747 00:29:21.982 Removing: /var/run/dpdk/spdk_pid328905 00:29:21.982 Removing: /var/run/dpdk/spdk_pid330512 00:29:21.982 Removing: /var/run/dpdk/spdk_pid336051 00:29:21.982 Removing: /var/run/dpdk/spdk_pid336057 00:29:21.982 Removing: /var/run/dpdk/spdk_pid338951 00:29:21.982 Removing: /var/run/dpdk/spdk_pid340354 00:29:21.982 Removing: /var/run/dpdk/spdk_pid341760 00:29:21.982 Removing: /var/run/dpdk/spdk_pid342619 00:29:21.982 Removing: /var/run/dpdk/spdk_pid344069 00:29:21.982 Removing: /var/run/dpdk/spdk_pid344906 00:29:21.982 Removing: /var/run/dpdk/spdk_pid350290 00:29:21.982 Removing: /var/run/dpdk/spdk_pid350563 00:29:21.982 Removing: /var/run/dpdk/spdk_pid350961 00:29:21.982 Removing: /var/run/dpdk/spdk_pid352512 00:29:21.982 Removing: /var/run/dpdk/spdk_pid352913 00:29:21.982 Removing: /var/run/dpdk/spdk_pid353192 00:29:21.982 Removing: /var/run/dpdk/spdk_pid355642 00:29:21.982 Removing: /var/run/dpdk/spdk_pid355646 00:29:21.982 Removing: /var/run/dpdk/spdk_pid357116 00:29:21.982 Removing: /var/run/dpdk/spdk_pid357479 00:29:21.982 Removing: /var/run/dpdk/spdk_pid357608 00:29:21.982 Clean 00:29:22.240 14:23:30 -- common/autotest_common.sh@1451 -- # return 0 00:29:22.240 14:23:30 -- spdk/autotest.sh@388 
-- # timing_exit post_cleanup 00:29:22.240 14:23:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:22.240 14:23:30 -- common/autotest_common.sh@10 -- # set +x 00:29:22.240 14:23:30 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:29:22.240 14:23:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:22.240 14:23:30 -- common/autotest_common.sh@10 -- # set +x 00:29:22.240 14:23:30 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:22.240 14:23:30 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:22.240 14:23:30 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:22.240 14:23:30 -- spdk/autotest.sh@395 -- # hash lcov 00:29:22.240 14:23:30 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:22.240 14:23:30 -- spdk/autotest.sh@397 -- # hostname 00:29:22.240 14:23:30 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:22.511 geninfo: WARNING: invalid characters removed from testname! 00:29:54.571 14:23:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:54.571 14:24:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:57.097 14:24:04 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:00.374 14:24:07 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:03.652 14:24:10 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:06.177 14:24:13 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:09.458 14:24:16 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:09.458 14:24:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.458 14:24:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:09.458 14:24:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.458 14:24:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.458 14:24:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.458 14:24:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.458 14:24:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.458 14:24:16 -- paths/export.sh@5 -- $ export PATH 00:30:09.458 14:24:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.458 14:24:16 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:09.458 14:24:16 -- common/autobuild_common.sh@447 -- $ date +%s 00:30:09.458 14:24:16 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721996656.XXXXXX 00:30:09.458 14:24:16 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721996656.71QNFj 00:30:09.458 14:24:16 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:30:09.458 14:24:16 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:30:09.458 14:24:16 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:09.458 14:24:16 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:09.458 14:24:16 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:09.458 14:24:16 -- common/autobuild_common.sh@463 -- $ get_config_params 00:30:09.458 14:24:16 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:30:09.458 14:24:16 -- common/autotest_common.sh@10 -- $ set +x 00:30:09.458 14:24:16 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:09.458 14:24:16 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:30:09.458 14:24:16 -- pm/common@17 -- $ local monitor 00:30:09.458 14:24:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.458 14:24:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.458 14:24:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.458 14:24:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.458 14:24:16 -- pm/common@21 -- $ date +%s 00:30:09.458 14:24:16 -- pm/common@25 -- $ sleep 1 00:30:09.458 14:24:16 -- pm/common@21 -- $ date +%s 00:30:09.458 14:24:16 -- pm/common@21 -- $ date +%s 00:30:09.458 14:24:16 -- pm/common@21 -- $ date +%s 00:30:09.458 14:24:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721996656 00:30:09.458 14:24:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721996656 00:30:09.458 14:24:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721996656 00:30:09.458 14:24:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721996656 00:30:09.458 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721996656_collect-vmstat.pm.log 00:30:09.458 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721996656_collect-cpu-load.pm.log 00:30:09.458 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721996656_collect-cpu-temp.pm.log 00:30:09.458 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721996656_collect-bmc-pm.bmc.pm.log 00:30:10.028 14:24:17 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:30:10.028 14:24:17 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:30:10.028 14:24:17 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:10.028 14:24:17 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:10.028 14:24:17 -- spdk/autopackage.sh@18 -- $ 
[[ 0 -eq 0 ]] 00:30:10.028 14:24:17 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:10.028 14:24:17 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:10.028 14:24:17 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:10.028 14:24:17 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:10.028 14:24:17 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:10.028 14:24:17 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:10.028 14:24:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:10.028 14:24:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:10.028 14:24:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:10.028 14:24:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:10.028 14:24:17 -- pm/common@44 -- $ pid=367706 00:30:10.028 14:24:17 -- pm/common@50 -- $ kill -TERM 367706 00:30:10.028 14:24:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:10.028 14:24:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:10.028 14:24:17 -- pm/common@44 -- $ pid=367708 00:30:10.028 14:24:17 -- pm/common@50 -- $ kill -TERM 367708 00:30:10.028 14:24:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:10.028 14:24:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:10.028 14:24:17 -- pm/common@44 -- $ pid=367710 00:30:10.028 14:24:17 -- pm/common@50 -- $ kill -TERM 367710 00:30:10.028 14:24:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:10.029 14:24:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:10.029 14:24:17 -- pm/common@44 -- $ pid=367732 00:30:10.029 14:24:17 -- pm/common@50 -- $ sudo -E kill -TERM 367732 00:30:10.029 + [[ -n 18629 ]] 00:30:10.029 + sudo kill 18629 00:30:10.039 [Pipeline] } 00:30:10.059 [Pipeline] // stage 00:30:10.065 [Pipeline] } 00:30:10.084 [Pipeline] // timeout 00:30:10.089 [Pipeline] } 00:30:10.107 [Pipeline] // catchError 00:30:10.113 [Pipeline] } 00:30:10.132 [Pipeline] // wrap 00:30:10.138 [Pipeline] } 00:30:10.154 [Pipeline] // catchError 00:30:10.162 [Pipeline] stage 00:30:10.164 [Pipeline] { (Epilogue) 00:30:10.176 [Pipeline] catchError 00:30:10.178 [Pipeline] { 00:30:10.192 [Pipeline] echo 00:30:10.194 Cleanup processes 00:30:10.201 [Pipeline] sh 00:30:10.487 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:10.487 367835 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:10.487 367974 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:10.502 [Pipeline] sh 00:30:10.789 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:10.789 ++ awk '{print $1}' 00:30:10.789 ++ grep -v 'sudo pgrep' 00:30:10.789 + sudo kill -9 367835 00:30:10.801 [Pipeline] sh 00:30:11.086 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:21.077 [Pipeline] sh 00:30:21.364 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:21.364 Artifacts sizes are good 00:30:21.380 [Pipeline] archiveArtifacts 00:30:21.388 Archiving 
artifacts 00:30:22.070 [Pipeline] sh 00:30:22.434 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:22.449 [Pipeline] cleanWs 00:30:22.459 [WS-CLEANUP] Deleting project workspace... 00:30:22.459 [WS-CLEANUP] Deferred wipeout is used... 00:30:22.465 [WS-CLEANUP] done 00:30:22.467 [Pipeline] } 00:30:22.488 [Pipeline] // catchError 00:30:22.501 [Pipeline] sh 00:30:22.786 + logger -p user.info -t JENKINS-CI 00:30:22.795 [Pipeline] } 00:30:22.811 [Pipeline] // stage 00:30:22.818 [Pipeline] } 00:30:22.836 [Pipeline] // node 00:30:22.843 [Pipeline] End of Pipeline 00:30:22.876 Finished: SUCCESS